Topic 1 - Exam A

Question #1 Topic 1

A company is implementing an application on Amazon EC2 instances. The application needs to process incoming transactions. When the application detects a transaction that is not valid, the application must send a chat message to the company's support team. To send the message, the application needs to retrieve the access token to authenticate by using the chat API.
A developer needs to implement a solution to store the access token. The access token must be encrypted at rest and in transit. The access token must also be accessible from other AWS accounts.
Which solution will meet these requirements with the LEAST management overhead?

  • A. Use an AWS Systems Manager Parameter Store SecureString parameter that uses an AWS Key Management Service (AWS KMS) AWS managed key to store the access token. Add a resource-based policy to the parameter to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Parameter Store. Retrieve the token from Parameter Store with the decrypt flag enabled. Use the decrypted access token to send the message to the chat.
  • B. Encrypt the access token by using an AWS Key Management Service (AWS KMS) customer managed key. Store the access token in an Amazon DynamoDB table. Update the IAM role of the EC2 instances with permissions to access DynamoDB and AWS KMS. Retrieve the token from DynamoDB. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
  • C. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.
  • D. Encrypt the access token by using an AWS Key Management Service (AWS KMS) AWS managed key. Store the access token in an Amazon S3 bucket. Add a bucket policy to the S3 bucket to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Amazon S3 and AWS KMS. Retrieve the token from the S3 bucket. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.

Correct Answer: D 🗳️

Community vote distribution
C (85%)
Other

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: C
The correct answer is C. https://aws.amazon.com/premiumsupport/knowledge-center/secrets-manager-share-between-accounts/ https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples_cross.html Option A is wrong. It seems like a good solution; however, AWS managed keys cannot be used for cross-account access.
upvoted 19 times
jipark
3 months ago
Cross-account access and rotation are the keywords for Secrets Manager.
upvoted 2 times
...
CyberBaby803
7 months, 2 weeks ago
Based on this, AWS managed keys can be used for cross-account access: https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 2 times
AgboolaKun
4 months, 3 weeks ago
I am not sure the documentation you provided specifically says that AWS managed keys can be used for cross-account access. However, @Untamables' explanation is on point. Please see this Stack Overflow thread: https://stackoverflow.com/questions/63420732/sharing-an-aws-managed-kms-key-with-another-account
upvoted 1 times
...
...
...
geekdamsel
Highly Voted 6 months ago
This question came up in my exam. The correct answer is C.
upvoted 7 times
...
dongocanh272
Most Recent 3 days, 2 hours ago
Selected Answer: D
I think using S3 to store and KMS to decrypt is the solution for this requirement
upvoted 1 times
...
cgpt
6 days, 22 hours ago
Selected Answer: A
By default, AWS Systems Manager Parameter Store does not natively support cross-account access for SecureString parameters. However, you can configure cross-account access to SecureString parameters by sharing the KMS key with the target AWS accounts. To do this, you need to create a resource-based KMS key policy that allows access to the key by the external AWS account(s). After configuring the KMS key policy to allow the necessary cross-account access, you can grant IAM roles in the target accounts permission to access the SecureString parameters that are encrypted using that KMS key.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
Answer C is correct
upvoted 1 times
...
huyhq
1 month ago
Selected Answer: C
i think c is correct
upvoted 1 times
...
NinjaCloud
1 month, 1 week ago
Since the question says LEAST management overhead, the answer cannot be B or C, because they suggest using an "AWS Key Management Service (AWS KMS) customer managed key". The answer should be A.
upvoted 1 times
...
Nav16011991
1 month, 1 week ago
Selected Answer: C
The correct answer is C.
upvoted 1 times
...
Shreya_aspire
1 month, 3 weeks ago
Selected Answer: C
https://aws.amazon.com/blogs/security/how-to-access-secrets-across-aws-accounts-by-attaching-resource-based-policies/
upvoted 1 times
...
hsinchang
1 month, 4 weeks ago
Selected Answer: C
Cross Account + Rotation = Secrets Manager
upvoted 1 times
...
Kashan6109
2 months ago
Selected Answer: C
Secrets Manager allows you to share secrets across accounts.
upvoted 1 times
...
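As a rough illustration of the pattern in answer C, the sketch below builds the resource-based policy and attaches it to the secret with boto3. The secret name and account ID are placeholders, not details from the question, and the EC2 instance role is assumed to already have `secretsmanager:GetSecretValue` permission.

```python
import json


def build_secret_resource_policy(consumer_account_id: str) -> dict:
    """Resource-based policy letting another AWS account read this secret."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{consumer_account_id}:root"},
                "Action": "secretsmanager:GetSecretValue",
                # "*" in a secret's resource policy refers to the secret itself.
                "Resource": "*",
            }
        ],
    }


def share_token(secret_id: str, consumer_account_id: str) -> None:
    import boto3  # imported lazily; requires the AWS SDK and valid credentials

    client = boto3.client("secretsmanager")
    client.put_resource_policy(
        SecretId=secret_id,  # e.g. "chat/api-token" (placeholder name)
        ResourcePolicy=json.dumps(build_secret_resource_policy(consumer_account_id)),
    )
    # Note: the customer managed KMS key's policy must also grant kms:Decrypt
    # to the consumer account; AWS managed keys cannot be shared this way,
    # which is the reason option A falls short.
```

The consumer account's EC2 instance role then just calls `get_secret_value`; Secrets Manager decrypts with the customer managed key transparently.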
Question #2 Topic 1

A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to implement an application that collects all the lifecycle events of the EC2 instances. The application needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in the company's main AWS account for further processing.
Which solution will meet these requirements?

  • A. Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
  • B. Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
  • C. Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
  • D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.

Correct Answer: D 🗳️

Community vote distribution
D (78%)
B (17%)
Other (4%)

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: D
The correct answer is D. Amazon EC2 instances can send the state-change notification events to Amazon EventBridge. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instance-state-changes.html Amazon EventBridge can send and receive events between event buses in AWS accounts. https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html
upvoted 14 times
jipark
3 months ago
thanks a lot
upvoted 1 times
...
...
geekdamsel
Highly Voted 6 months ago
This question came up in my exam. The correct answer is D.
upvoted 9 times
...
dongocanh272
Most Recent 3 days, 2 hours ago
Selected Answer: D
My answer is D
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
Answer D is correct
upvoted 1 times
...
TeeTheMan
3 months, 1 week ago
Selected Answer: B
Seems to me the correct answer is B. The current most voted answer is D, but can someone explain why it's better than B? I think B is better because it has fewer steps: the events go straight from each account into the queue, unlike D, which has the intermediate step of the main account's event bus. Also, why would you want to pollute the main account's event bus with events from other accounts when it isn't necessary?
upvoted 3 times
...
KillThemWithKindness
3 months, 3 weeks ago
B. Answer A is incorrect because Amazon EventBridge events can't be sent directly from one account's event bus to another. Answer C is incorrect because it's unnecessary and inefficient to use Lambda to periodically scan all EC2 instances for lifecycle changes; Amazon EventBridge can capture these events automatically as they occur. Answer D is incorrect because it is not possible to configure the main account event bus to receive events from all accounts directly, and Amazon EventBridge events can't be sent directly from one account's event bus to another. The EventBridge rules need to be set up in the accounts where the events are generated.
upvoted 2 times
KillThemWithKindness
3 months, 3 weeks ago
Sorry, I'm wrong. AWS allows sending and receiving Amazon EventBridge events between AWS accounts: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html Both B and D work, but D is more centralized.
upvoted 4 times
...
...
ezredame
5 months, 1 week ago
Selected Answer: D
The correct answer is D. https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html
upvoted 1 times
...
Bibay
6 months ago
Selected Answer: A
Option D is not the best solution because it involves configuring the permissions on the main account's EventBridge event bus to receive events from all accounts, which can lead to potential security risks. Allowing other AWS accounts to send events to the main account's EventBridge event bus can potentially open up a security vulnerability, as it increases the attack surface area for the main account. On the other hand, option A is the best solution because it involves using Amazon EventBridge, which is a serverless event bus that can be used to route events between AWS services or AWS accounts. By configuring Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account, and adding the SQS queue as a target of the rule, the application can collect all the lifecycle events of the EC2 instances in a single queue in the main account without compromising the security posture of the AWS environment.
upvoted 1 times
...
ihebchorfi
6 months, 1 week ago
Selected Answer: B
Solution B meets all the requirements. By using resource policies, you can grant permissions for other accounts to write to the SQS queue in the main account. Then, you create EventBridge rules in each account that match EC2 lifecycle events and use the main account's SQS queue as a target for these rules. It's the best choice for this scenario.
upvoted 1 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: D
This solution allows the collection of all the lifecycle events of the EC2 instances from multiple AWS accounts and stores them in a single Amazon SQS queue in the company’s main AWS account for further processing
upvoted 1 times
...
shahs10
7 months, 1 week ago
For option C, using Lambda does not seem to be a good solution, as we would have to trigger the Lambda function on a schedule, and it would have less granularity in time. For D: why would we match EC2 instance lifecycle events on the main account's event bus rather than on each account's event bus, reducing overhead for the main account?
upvoted 1 times
...
good_
7 months, 3 weeks ago
I think the answer to this question is also A.
upvoted 4 times
...
haaris786
7 months, 3 weeks ago
Answer A: This makes more sense and a simplified solution.
upvoted 4 times
...
aragon_saa
7 months, 3 weeks ago
D https://www.examtopics.com/discussions/amazon/view/96209-exam-aws-certified-developer-associate-topic-1-question-396/
upvoted 3 times
...
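A minimal boto3 sketch of the two halves of answer D; the rule names, bus and queue ARNs, and account IDs below are placeholders, not values from the question.

```python
import json


def ec2_lifecycle_pattern() -> dict:
    """EventBridge pattern that matches all EC2 instance state-change events."""
    return {
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }


def forward_events_to_main_bus(main_bus_arn: str) -> None:
    """Run in each member account: forward matching events to the main bus."""
    import boto3  # imported lazily; requires the AWS SDK and valid credentials

    events = boto3.client("events")
    events.put_rule(Name="forward-ec2-lifecycle",
                    EventPattern=json.dumps(ec2_lifecycle_pattern()))
    events.put_targets(Rule="forward-ec2-lifecycle",
                       Targets=[{"Id": "main-bus", "Arn": main_bus_arn}])


def route_to_queue_in_main_account(queue_arn: str, member_account_id: str) -> None:
    """Run in the main account: accept events from a member account and
    deliver matching events to the SQS queue."""
    import boto3

    events = boto3.client("events")
    # Permit the member account to put events on this account's event bus.
    events.put_permission(Action="events:PutEvents",
                          Principal=member_account_id,
                          StatementId=f"allow-{member_account_id}")
    events.put_rule(Name="ec2-lifecycle-to-sqs",
                    EventPattern=json.dumps(ec2_lifecycle_pattern()))
    events.put_targets(Rule="ec2-lifecycle-to-sqs",
                       Targets=[{"Id": "lifecycle-queue", "Arn": queue_arn}])
```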
Question #3 Topic 1

An application is using Amazon Cognito user pools and identity pools for secure access. A developer wants to integrate the user-specific file upload and download features in the application with Amazon S3. The developer must ensure that the files are saved and retrieved in a secure manner and that users can access only their own files. The file sizes range from 3 KB to 300 MB.
Which option will meet these requirements with the HIGHEST level of security?

  • A. Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
  • B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
  • C. Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
  • D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3.

Correct Answer: D 🗳️

Community vote distribution
D (85%)
Other

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: D
D. I actually apply this solution in production applications. Examples: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html https://docs.amplify.aws/lib/storage/getting-started/q/platform/js/
upvoted 7 times
...
dongocanh272
3 days, 1 hour ago
Selected Answer: B
I'm deciding between B and D.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
Answer D is correct
upvoted 1 times
...
Bibay
6 months ago
Selected Answer: C
D is not the best option, as IAM policies only apply to actions taken through the AWS Management Console, SDKs, and CLI; they do not apply to direct access to S3 from the application. Option B is a good approach, but it requires additional overhead to manage the DynamoDB table. Option A is also a possible solution but only provides limited security, as it only validates the upload and download requests and does not provide user-level authorization. Option C is the best choice as it allows the developer to implement a custom authentication mechanism in the Lambda function, providing the highest level of security. The authentication mechanism can be integrated with Amazon Cognito user pools and identity pools to authenticate users and ensure that only the owner of the file can upload and download it.
upvoted 1 times
grzess
5 months, 3 weeks ago
Implementing a custom authentication/authorization solution is extremely bad practice. Any developer is prone to mistakes; it's always better to trust the dedicated solution. Thus, option C is definitely not the correct one.
upvoted 1 times
...
...
MrTee
6 months, 2 weeks ago
Selected Answer: D
This solution ensures that users can access only their own files in a secure manner.
upvoted 3 times
...
haaris786
7 months, 3 weeks ago
Answer D: https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html
upvoted 3 times
...
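The identity-pool pattern in answer D hinges on IAM policy variables. Below is a minimal sketch of such a policy, built as a Python dict; the bucket name is a placeholder, and the actions shown are illustrative.

```python
def per_user_s3_policy(bucket: str) -> dict:
    """IAM policy for the Cognito authenticated role: each identity can only
    touch objects under its own identity-ID prefix in the bucket."""
    # Policy variable resolved per user at request time by IAM.
    sub = "${cognito-identity.amazonaws.com:sub}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{sub}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                # Listing is restricted to the caller's own prefix.
                "Condition": {"StringLike": {"s3:prefix": [f"{sub}/*"]}},
            },
        ],
    }
```

Because the identity ID comes from the verified Cognito credentials, a user cannot forge a path into someone else's folder, and large files (up to the 300 MB in the question) go directly to S3 without proxying through Lambda.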
Question #4 Topic 1

A company is building a scalable data management solution by using AWS services to improve the speed and agility of development. The solution will ingest large volumes of data from various sources and will process this data through multiple business rules and transformations.
The solution requires business rules to run in sequence and to handle reprocessing of data if errors occur when the business rules run. The company needs the solution to be scalable and to require the least possible maintenance.
Which AWS service should the company use to manage and automate the orchestration of the data flows to meet these requirements?

  • A. AWS Batch
  • B. AWS Step Functions
  • C. AWS Glue
  • D. AWS Lambda

Correct Answer: D 🗳️

Community vote distribution
B (83%)
Other

geekdamsel
Highly Voted 6 months ago
Got this question in my exam. The correct answer is B.
upvoted 7 times
...
dongocanh272
Most Recent 3 days, 1 hour ago
Selected Answer: B
My answer is B
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
B is correct
upvoted 1 times
...
NinjaCloud
1 month, 1 week ago
Best option: B
upvoted 1 times
...
panoptica
1 month, 4 weeks ago
Selected Answer: B
b init
upvoted 1 times
...
sharma_ps93
2 months ago
The answer is B(Step Functions). For people confused with AWS Lambda, it is a compute service and can be used within Step Functions, but it alone does not provide the orchestration and error handling features required in this case.
upvoted 2 times
...
casharan
2 months, 1 week ago
Selected Answer: D
check the link below: https://docs.aws.amazon.com/lambda/latest/operatorguide/orchestration.html
upvoted 1 times
pefey26437
1 month, 1 week ago
My man... in your link, 4th line, it says Step Functions.
upvoted 2 times
casharan
2 weeks, 3 days ago
Thanks. You're right.
upvoted 1 times
...
...
...
hmdev
2 months, 1 week ago
Selected Answer: B
You can use Step Functions to create a workflow of functions that are invoked in sequence. You can also push output from one step and use it as input for the next step. Step Functions also has very useful Retry and Catch error-handling features.
upvoted 1 times
...
jayvarma
3 months ago
Keywords: run in sequence and handle reprocessing of data. So the answer is option B. Also, each task in a Step Functions workflow can be handled by a different AWS service, such as AWS Lambda or AWS Glue (which is used for ETL jobs).
upvoted 1 times
...
elfinka9
3 months, 1 week ago
Selected Answer: B
I'm thinking B
upvoted 1 times
...
Suvomita
4 months ago
Selected Answer: D
D is the right answer
upvoted 1 times
...
MatthewHuiii
4 months, 2 weeks ago
B is correct
upvoted 1 times
...
Baba_Eni
5 months, 1 week ago
Selected Answer: B
All the key words of the question points at Step Function, check the link below: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
upvoted 2 times
jipark
3 months ago
"manage and automate the orchestration of the data flows"
upvoted 1 times
...
...
ricky536
5 months, 1 week ago
B is correct
upvoted 1 times
...
ihebchorfi
6 months, 1 week ago
Selected Answer: B
Easily B
upvoted 1 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
Option B is the correct choice. AWS Step Functions allows you to coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. It also provides a way to handle errors and retry failed steps, making it a good fit for the company’s requirements.
upvoted 2 times
...
MrTee
6 months, 3 weeks ago
Selected Answer: C
Question talks of ingesting huge volumes of data and orchestrating data flows, keywords for aws glue. I go with C
upvoted 1 times
rlnd2000
6 months, 1 week ago
Glue is an ETL tool; it is not for orchestration of data flows. Step Functions is for orchestration, so I think Glue is not the best option here.
upvoted 2 times
...
...
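To make the Step Functions fit concrete, here is a minimal Amazon States Language definition, built as JSON from Python, showing the sequential business rules and retry-with-backoff reprocessing the question asks for. The state names, Lambda ARNs, and retry values are hypothetical examples.

```python
import json


def pipeline_definition(rule_a_arn: str, rule_b_arn: str) -> str:
    """ASL sketch: two business rules run in sequence, each retrying
    with exponential backoff if it fails."""
    retry = [
        {
            "ErrorEquals": ["States.TaskFailed"],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 2.0,  # 2 s, 4 s, 8 s between attempts
        }
    ]
    return json.dumps(
        {
            "Comment": "Sequential business rules with automatic reprocessing",
            "StartAt": "RuleA",
            "States": {
                "RuleA": {
                    "Type": "Task",
                    "Resource": rule_a_arn,  # placeholder Lambda ARN
                    "Retry": retry,
                    "Next": "RuleB",
                },
                "RuleB": {
                    "Type": "Task",
                    "Resource": rule_b_arn,  # placeholder Lambda ARN
                    "Retry": retry,
                    "End": True,
                },
            },
        }
    )
```

The Retry block is what answers the "reprocessing of data if errors occur" requirement without any custom orchestration code to maintain.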
Question #5 Topic 1

A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table.
What is the MOST likely cause of this issue?

  • A. The Lambda function's concurrency limit has been exceeded.
  • B. DynamoDB table requires a global secondary index (GSI) to support writes.
  • C. The Lambda function does not have IAM permissions to write to DynamoDB.
  • D. The DynamoDB table is not running in the same Availability Zone as the Lambda function.

Correct Answer: D 🗳️

Community vote distribution
C (100%)

dongocanh272
3 days, 1 hour ago
Selected Answer: C
I think C is correct.
upvoted 1 times
...
chvtejaswi
1 month, 4 weeks ago
Selected Answer: C
correct answer is C
upvoted 3 times
...
hsinchang
1 month, 4 weeks ago
Selected Answer: C
It is clearly something about permissions. So not A or B. Lambda functions can run in multiple Availability Zones (AZs) to ensure high availability and resilience. So it is not D.
upvoted 3 times
...
kvpa
2 months, 3 weeks ago
Selected Answer: C
correct answer is C
upvoted 1 times
...
ssoratroi
2 months, 4 weeks ago
Selected Answer: C
surely C
upvoted 1 times
...
elfinka9
3 months, 1 week ago
Does anyone know how the correct answer is determined? Option C is the most voted and correct according to https://www.examtopics.com/discussions/amazon/view/88237-exam-aws-certified-developer-associate-topic-1-question-164/
upvoted 2 times
...
geekdamsel
6 months ago
Got this question in exam. Correct answer is C.
upvoted 4 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: C
The Lambda function needs to have the appropriate IAM permissions to write to the DynamoDB table. If the function does not have these permissions, it will fail when it attempts to write to the table.
upvoted 1 times
...
zk1200
6 months, 4 weeks ago
Selected Answer: C
C is the simplest answer
upvoted 2 times
...
khaled1123
7 months ago
Selected Answer: C
of course C
upvoted 2 times
...
TungNNS
7 months, 1 week ago
Selected Answer: C
No doubt C
upvoted 2 times
...
ihta_2031
7 months, 1 week ago
Selected Answer: C
C is the answer
upvoted 2 times
...
Untamables
7 months, 3 weeks ago
Selected Answer: C
No doubt C
upvoted 2 times
...
svrnvtr
7 months, 3 weeks ago
Selected Answer: C
It is C
upvoted 2 times
...
prabhay786
7 months, 3 weeks ago
I will go for C too
upvoted 2 times
...
haaris786
7 months, 3 weeks ago
I will go for C with this one.
upvoted 3 times
...
aragon_saa
7 months, 3 weeks ago
C https://www.examtopics.com/discussions/amazon/view/88237-exam-aws-certified-developer-associate-topic-1-question-164/
upvoted 3 times
...
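For reference, the missing permission in answer C would be granted by an identity policy like the sketch below, attached to the function's execution role. The specific write actions and the table ARN are illustrative choices, not from the question.

```python
def dynamodb_write_policy(table_arn: str) -> dict:
    """Identity policy granting the Lambda execution role write access
    to one DynamoDB table (least privilege: scoped to that table only)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:PutItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:BatchWriteItem",
                ],
                "Resource": table_arn,
            }
        ],
    }
```

Without a statement like this, the function's writes fail with an AccessDeniedException even though the S3-triggered invocation itself succeeds, which matches the symptom in the question.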
Question #6 Topic 1

A developer is creating an AWS CloudFormation template to deploy Amazon EC2 instances across multiple AWS accounts. The developer must choose the EC2 instances from a list of approved instance types.
How can the developer incorporate the list of approved instance types in the CloudFormation template?

  • A. Create a separate CloudFormation template for each EC2 instance type in the list.
  • B. In the Resources section of the CloudFormation template, create resources for each EC2 instance type in the list.
  • C. In the CloudFormation template, create a separate parameter for each EC2 instance type in the list.
  • D. In the CloudFormation template, create a parameter with the list of EC2 instance types as AllowedValues.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

Bibay
Highly Voted 6 months ago
Selected Answer: D
Option D is the correct answer. In the CloudFormation template, the developer should create a parameter with the list of approved EC2 instance types as AllowedValues. This way, users can select the instance type they want to use when launching the CloudFormation stack, but only from the approved list. Option A is not a scalable solution as it requires creating a separate CloudFormation template for each EC2 instance type, which can become cumbersome and difficult to manage as the number of approved instance types grows. Option B is not necessary as creating resources for each EC2 instance type in the list would not enforce the requirement to choose only from the approved list. It would also increase the complexity of the template and make it difficult to manage. Option C is not ideal as it would require creating a separate parameter for each EC2 instance type, which can become difficult to manage as the number of approved instance types grows. Also, it does not enforce the requirement to choose only from the approved list.
upvoted 12 times
jipark
3 months ago
quite much clear explanation !!!
upvoted 1 times
...
...
geekdamsel
Highly Voted 6 months ago
Got this question in my exam. The correct answer is D.
upvoted 6 times
...
Pupina
Most Recent 4 months, 1 week ago
Why B instead of C? Each AWS SDK implements retry logic automatically; most AWS SDKs now support exponential backoff and jitter as part of their retry behavior. Then D to increase capacity: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TroubleshootingThrottling.html C & D
upvoted 1 times
Pupina
4 months, 1 week ago
This answer is for question 7 not 6
upvoted 1 times
...
...
NanaDanso
7 months ago
Selected Answer: D
D looks about right
upvoted 4 times
...
prabhay786
7 months, 3 weeks ago
It should be D
upvoted 4 times
...
aragon_saa
7 months, 3 weeks ago
D https://www.examtopics.com/discussions/amazon/view/88788-exam-aws-certified-developer-associate-topic-1-question-343/
upvoted 3 times
...
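A minimal sketch of answer D, building the CloudFormation template in its JSON form from Python. The instance types and AMI ID are example values, not an official approved list.

```python
def approved_instance_template(approved: list) -> dict:
    """CloudFormation template (JSON form) whose InstanceType parameter
    only accepts values from the approved list via AllowedValues."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            "InstanceType": {
                "Type": "String",
                "Default": approved[0],
                "AllowedValues": approved,  # stack creation fails on any other value
                "Description": "Must be one of the approved EC2 instance types.",
            }
        },
        "Resources": {
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": {"Ref": "InstanceType"},
                    "ImageId": "ami-00000000",  # placeholder AMI ID
                },
            }
        },
    }
```

One template serves every account; CloudFormation itself rejects any instance type outside AllowedValues before a single resource is created.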
Question #7 Topic 1

A developer has an application that makes batch requests directly to Amazon DynamoDB by using the BatchGetItem low-level API operation. The responses frequently return values in the UnprocessedKeys element.
Which actions should the developer take to increase the resiliency of the application when the batch response includes values in UnprocessedKeys? (Choose two.)

  • A. Retry the batch operation immediately.
  • B. Retry the batch operation with exponential backoff and randomized delay.
  • C. Update the application to use an AWS software development kit (AWS SDK) to make the requests.
  • D. Increase the provisioned read capacity of the DynamoDB tables that the operation accesses.
  • E. Increase the provisioned write capacity of the DynamoDB tables that the operation accesses.

Correct Answer: BD 🗳️

Community vote distribution
BD (51%)
BC (46%)
Other (3%)

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: BC
B & C https://docs.aws.amazon.com/general/latest/gr/api-retries.html
upvoted 16 times
...
brandon87
Highly Voted 7 months, 1 week ago
Selected Answer: BD
(B) If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. (D) The most likely cause of a failed read or a failed write is throttling. For BatchGetItem, one or more of the tables in the batch request does not have enough provisioned read capacity to support the operation https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
upvoted 10 times
...
Abdlhince
Most Recent 5 days, 5 hours ago
Selected Answer: BC
B. This is a good practice to handle throttling errors and avoid overwhelming the server with too many requests at the same time. Exponential backoff means increasing the waiting time between retries exponentially, such as 1 second, 2 seconds, 4 seconds, and so on. Randomized delay means adding some randomness to the waiting time, such as 1.2 seconds, 2.5 seconds, 3.8 seconds, and so on. This can help reduce the chance of collisions and spikes in the network traffic. C. This is a recommended way to interact with DynamoDB, as AWS SDKs provide high-level abstractions and convenience methods for working with DynamoDB. AWS SDKs also handle low-level details such as authentication, retry logic, error handling, and pagination for you.
upvoted 1 times
...
ronn555
5 days, 11 hours ago
BC. The question only states that there are UnprocessedKeys. That means the batch operation occurred correctly most of the time. It states that the batch frequently contains more keys than can be returned with the present RCUs. It does not state that any single key has caused a ProvisionedThroughputExceededException (in which case D would be necessary). So D would only make it more performant because of fewer retries. However, B and C are examples of resilience.
upvoted 1 times
...
Rameez1
2 weeks, 2 days ago
Selected Answer: BC
Option B & C.
upvoted 1 times
...
ashley369534
3 weeks, 5 days ago
B & C. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html First things first: this question asks about dealing with errors, and B & C match the doc, where error handling has two parts: 1. Error handling in your application (the AWS SDKs perform their own retries and error checking). 2. Error retries and exponential backoff (if DynamoDB returns any unprocessed items, you should retry the batch operation on those items; however, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed, which is option B). D is irrelevant.
upvoted 1 times
...
cai123456
1 month, 1 week ago
Between C and B, I choose C because of the keyword "frequently". Using the AWS SDK, we update the code once and do not need to retry frequently.
upvoted 1 times
...
misa27
1 month, 3 weeks ago
Selected Answer: BD
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get. https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
upvoted 1 times
...
chvtejaswi
1 month, 4 weeks ago
Selected Answer: BD
B and D
upvoted 1 times
...
mrsoa
2 months, 1 week ago
Selected Answer: BD
B D From Stephan's maarek course BatchGetItem • Return items from one or more tables • Up to 100 items, up to 16 MB of data • Items are retrieved in parallel to minimize latency • UnprocessedKeys for failed read operations (exponential backoff or add RCU)
upvoted 4 times
...
love777
2 months, 1 week ago
Selected Answer: BC
B. Retry with Exponential Backoff: When the batch response includes values in UnprocessedKeys, it indicates that some items could not be processed due to limitations like provisioned capacity or system overload. Retry the batch operation with an exponential backoff strategy, which means progressively increasing the time between retries. This helps prevent overwhelming the DynamoDB service and improves the chances of successfully processing the items in subsequent retries. C. Use AWS SDK: AWS SDKs provide built-in retry mechanisms that handle transient errors like UnprocessedKeys. When using an AWS SDK, you don't need to implement the retry logic yourself. The SDK will automatically handle retries with appropriate backoff strategies, making your application more resilient and reducing the burden of error handling.
upvoted 1 times
...
aanataliya
2 months, 2 weeks ago
Selected Answer: BD
B and D are the correct answers. The AWS SDK automatically takes care of both retries and exponential backoff. If we choose C, selecting only C would answer the question (no need for B), but we need to choose two answers. In addition, the question does not specifically say to change the core logic from the low-level API to the SDK. By choosing B and D we can improve resiliency. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
upvoted 5 times
...
ninomfr64
2 months, 3 weeks ago
Selected Answer: BC
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. thus b) and c) as the "AWS SDK implements an exponential backoff algorithm for better flow control" https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff:~:text=each%20AWS%20SDK%20implements%20an%20exponential%20backoff%20algorithm%20for%20better%20flow%20control
upvoted 1 times
...
jipark
3 months ago
Selected Answer: BC
B: for batch operations, exponential backoff looks like the answer. C: making direct low-level calls to DynamoDB is not recommended.
upvoted 1 times
...
KillThemWithKindness
3 months, 3 weeks ago
Selected Answer: BD
C. Using an AWS SDK can simplify making requests and handling responses, but on its own, it does not address the underlying issue of unprocessed keys.
upvoted 2 times
...
awsstark
3 months, 3 weeks ago
Selected Answer: BD
(B) If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. (D) The most likely cause of a failed read or a failed write is throttling. For BatchGetItem, one or more of the tables in the batch request does not have enough provisioned read capacity to support the operation https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
upvoted 2 times
...
tttamtttam
3 months, 4 weeks ago
Selected Answer: BC
The hint is it is using the low-level API operation currently. Using AWS SDK, retries and optimization will be done by the SDK.
upvoted 4 times
...
Question #8 Topic 1

A company is running a custom application on a set of on-premises Linux servers that are accessed using Amazon API Gateway. AWS X-Ray tracing has been enabled on the API test stage.
How can a developer enable X-Ray tracing on the on-premises servers with the LEAST amount of configuration?

  • A. Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.
  • B. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.
  • C. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.
  • D. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Ugo_22
3 weeks, 4 days ago
Selected Answer: B
The answer is obviously B.
upvoted 1 times
...
Kowalsky95
1 month, 1 week ago
From doc: The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service. Running just the daemon won't achieve anything.
upvoted 1 times
...
geekdamsel
6 months ago
Got this question in the exam. Correct answer is B.
upvoted 3 times
...
Bibay
6 months ago
Selected Answer: B
B. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service is the correct option. The X-Ray daemon can be installed and configured on the on-premises servers to capture data and send it to the X-Ray service. This requires minimal configuration and setup. Option A is incorrect because while the X-Ray SDK can be used to capture data on the on-premises servers, it requires more configuration and development effort than the X-Ray daemon. Options C and D are also incorrect because they involve setting up an AWS Lambda function, which is not necessary for enabling X-Ray tracing on the on-premises servers.
upvoted 2 times
...
ihta_2031
7 months, 1 week ago
Selected Answer: B
It's B
upvoted 4 times
...
Untamables
7 months, 3 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html
upvoted 4 times
...
haaris786
7 months, 3 weeks ago
B: It is the daemon, which can be installed on Linux.
upvoted 3 times
...
aragon_saa
7 months, 3 weeks ago
B https://www.examtopics.com/discussions/amazon/view/28998-exam-aws-certified-developer-associate-topic-1-question-324/
upvoted 3 times
...
Question #9 Topic 1

A company wants to share information with a third party. The third party has an HTTP API endpoint that the company can use to share the information. The company has the required API key to access the HTTP API.
The company needs a way to manage the API key by using code. The integration of the API key with the application code cannot affect application performance.
Which solution will meet these requirements MOST securely?

  • A. Store the API credentials in AWS Secrets Manager. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
  • B. Store the API credentials in a local code variable. Push the code to a secure Git repository. Use the local code variable at runtime to make the API call.
  • C. Store the API credentials as an object in a private Amazon S3 bucket. Restrict access to the S3 object by using IAM policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
  • D. Store the API credentials in an Amazon DynamoDB table. Restrict access to the table by using resource-based policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.

Correct Answer: B 🗳️

Community vote distribution
A (100%)

Kristijan92
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
answer A
upvoted 9 times
...
gullyboy77
Most Recent 1 month ago
Selected Answer: A
Secret Manager is the safest way to store secrets in AWS.
upvoted 1 times
...
chvtejaswi
1 month, 4 weeks ago
Selected Answer: A
Answer A
upvoted 2 times
...
hmdev
2 months, 1 week ago
Selected Answer: A
A seems to be the most secure and correct. Always use Secrets Manager to store secrets, as the name implies.
upvoted 1 times
...
Yuxing_Li
2 months, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
...
sivuca1
2 months, 1 week ago
Selected Answer: A
The other options (B, C, and D) are not as safe or manageable.
upvoted 1 times
...
sp323
2 months, 3 weeks ago
Selected Answer: A
Secrets Manager is secure, so A
upvoted 2 times
...
ssoratroi
2 months, 4 weeks ago
Selected Answer: A
Secrets Manager is the better solution, so A
upvoted 1 times
...
jayvarma
3 months ago
Obviously we are not going to store the API credentials in local code variables, so option B is ruled out. Coming to option D: it is not wrong to store the API credentials in a DynamoDB table as long as you encrypt them, but considering the extent of human error, there is a chance the DynamoDB table could be given too many permissions. As for option A, a secrets manager or parameter store's primary purpose is to store secrets, so it is ideal to use that kind of service to store the API credentials.
upvoted 3 times
...
elfinka9
3 months, 1 week ago
Selected Answer: A
Why B is marked as correct ????
upvoted 4 times
...
Kashan6109
3 months, 1 week ago
Selected Answer: A
Correct answer is A, option B is not secure at all
upvoted 2 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: A
Why it is marked as B???????????????
upvoted 3 times
Solovey
3 weeks, 2 days ago
for you to read this comments
upvoted 1 times
...
...
MrPie
4 months ago
It's A, but at least on react native to retrieve secrets from AWS you need the API key so this option doesn't work. You would need to make an HTTP gateway for a lambda function that retrieves the secret.
upvoted 1 times
...
Devon_Fazekas
6 months ago
We all know option A is the most secure and efficient method. Who decided the answer was B?
upvoted 3 times
...
Bibay
6 months ago
Selected Answer: A
The MOST secure solution to manage the API key while ensuring that the integration of the API key with the application code does not affect application performance is to store the API key in AWS Secrets Manager. The API key can be retrieved at runtime by using the AWS SDK, which does not impact application performance. Therefore, option A is the correct answer. Option B is not secure as it exposes the API key to anyone with access to the code repository, which increases the risk of unauthorized access. Option C and D may work, but they require additional configuration and permissions management. Storing the API key in an S3 bucket or a DynamoDB table could expose the key to unauthorized users if proper IAM policies are not in place. Therefore, option A is the most secure and simple solution to manage the API key while not affecting the application's performance.
upvoted 1 times
...
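Option A's runtime retrieval can be sketched in Python. The secret name `third-party/api-key` and the JSON field `api_key` are illustrative assumptions, and the value is cached in memory so repeated API calls don't pay a Secrets Manager round trip each time (addressing the "cannot affect application performance" requirement); the client is injected so a real boto3 Secrets Manager client could be passed in:

```python
import json

_cache = {}

def get_api_key(client, secret_id="third-party/api-key"):
    """Fetch and cache the API key from Secrets Manager.

    `client` is expected to expose get_secret_value(SecretId=...), e.g. a
    real boto3 Secrets Manager client. The SecretString is assumed to be a
    JSON document with an "api_key" field (an illustrative convention).
    """
    if secret_id not in _cache:
        resp = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(resp["SecretString"])["api_key"]
    return _cache[secret_id]
```

In real use the cache would also need an expiry so rotated secrets are picked up; AWS publishes caching client libraries for exactly this purpose.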
zk1200
6 months, 4 weeks ago
Selected Answer: A
secrets manager seems most likely since it is meant for storing items like API keys.
upvoted 1 times
...
hmmm0101
7 months ago
Selected Answer: A
Answer A
upvoted 4 times
...
Question #10 Topic 1

A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API URL must be available to all current and future deployed versions of the application across development, testing, and production environments.
How should the developer retrieve the variables with the FEWEST application changes?

  • A. Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
  • B. Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
  • C. Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
  • D. Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.

Correct Answer: B 🗳️

Community vote distribution
A (100%)

geekdamsel
Highly Voted 6 months ago
Got this question in the exam. Correct answer is A.
upvoted 14 times
...
Warlord_92
Highly Voted 7 months, 3 weeks ago
Selected Answer: A
The application has credentials and a URL, so it's convenient to store them in SSM Parameter Store and retrieve them from there.
upvoted 9 times
...
vmintam
Most Recent 1 week ago
I think the correct answer is A, but why is it marked as B?
upvoted 1 times
...
alihaider907
1 month, 3 weeks ago
I think the wording of option A has a typo: first it says "Update the application to retrieve the variables from AWS Systems Manager Parameter Store", then it says "Store the credentials in AWS Secrets Manager in each environment."
upvoted 1 times
...
meetparag81
2 months, 1 week ago
A is correct
upvoted 1 times
...
jayvarma
3 months ago
Option A is correct. The AWS Systems Manager Parameter Store's primary purpose is to secure sensitive information such as API URLs, credentials, and the variables that we store in it.
upvoted 2 times
...
Tee400
4 months, 2 weeks ago
Selected Answer: A
AWS Systems Manager Parameter Store is a service that allows you to securely store configuration data such as API URLs, credentials, and other variables. By updating the application to retrieve the variables from Parameter Store, you can separate the configuration from the application code, making it easier to manage and update the variables without modifying the application itself. Storing the credentials in AWS Secrets Manager provides an additional layer of security for sensitive information.
upvoted 2 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: A
This solution allows the developer to securely store and retrieve different types of variables, including authentication information for a remote API, the URL for the API, and credentials.
upvoted 2 times
...
[Removed]
6 months, 2 weeks ago
Selected Answer: A
A; that's what Parameter Store is for.
upvoted 1 times
...
qsergii
6 months, 4 weeks ago
Definitely A
upvoted 1 times
...
fqmark
7 months ago
It should be A; KMS is used for encryption: https://aws.amazon.com/kms/
upvoted 3 times
...
prabhay786
7 months, 3 weeks ago
It should be option A
upvoted 2 times
...
Question #11 Topic 1

A company is migrating legacy internal applications to AWS. Leadership wants to rewrite the internal employee directory to use native AWS services. A developer needs to create a solution for storing employee contact details and high-resolution photos for use with the new application.
Which solution will enable the search and retrieval of each employee's individual details and high-resolution photos using AWS APIs?

  • A. Encode each employee's contact information and photos using Base64. Store the information in an Amazon DynamoDB table using a sort key.
  • B. Store each employee's contact information in an Amazon DynamoDB table along with the object keys for the photos stored in Amazon S3.
  • C. Use Amazon Cognito user pools to implement the employee directory in a fully managed software-as-a-service (SaaS) method.
  • D. Store employee contact information in an Amazon RDS DB instance with the photos stored in Amazon Elastic File System (Amazon EFS).

Correct Answer: B 🗳️

Community vote distribution
B (100%)

hmdev
2 months, 1 week ago
Selected Answer: B
DynamoDB is very fast, secure, and scalable. S3 is very inexpensive, virtually limitless, and can handle large files. So B is the correct answer.
upvoted 2 times
...
ninomfr64
2 months, 3 weeks ago
Selected Answer: B
A is not really clear to me; however, encoding all info in Base64 would make search a bit complex. C does not provide a solution for high-resolution images. D: EFS does not provide API access to content.
upvoted 2 times
...
jayvarma
3 months ago
Option B. As the question says we have to store high-resolution photos, the solution is to use S3 here, because DynamoDB cannot store any item larger than 400 KB. We can use DynamoDB to store each employee's contact information and reference the object keys in the table to retrieve the high-resolution images from S3.
upvoted 1 times
...
Bibay
6 months ago
Selected Answer: B
B. Store each employee's contact information in an Amazon DynamoDB table along with the object keys for the photos stored in Amazon S3. Storing each employee's contact information in an Amazon DynamoDB table along with the object keys for the photos stored in Amazon S3 provides a scalable and efficient solution for storing and retrieving employee details and high-resolution photos using AWS APIs. The developer can use the DynamoDB table to query and retrieve employee details, while the S3 bucket can be used to store the high-resolution photos. By using S3, the solution can support large amounts of data while enabling fast retrieval times. The combination of DynamoDB and S3 can provide a cost-effective and scalable solution for storing employee data and photos.
upvoted 4 times
...
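The DynamoDB-plus-S3 pattern in option B can be sketched as: the DynamoDB item carries only the S3 object key, and the photo bytes live in S3. The table name, attribute names, and key format below are illustrative assumptions; the client is injected so a real boto3 DynamoDB client could be used:

```python
def employee_item(employee_id, name, photo_key):
    """DynamoDB item holding contact details plus the S3 key of the photo."""
    return {
        "employee_id": {"S": employee_id},
        "name": {"S": name},
        "photo_s3_key": {"S": photo_key},  # the photo itself lives in S3
    }

def fetch_photo_key(client, table, employee_id):
    """Look up the S3 key; the photo is then fetched separately via S3 GetObject.

    `client` is expected to expose get_item(TableName=..., Key=...), e.g. a
    real boto3 DynamoDB client.
    """
    resp = client.get_item(
        TableName=table, Key={"employee_id": {"S": employee_id}}
    )
    return resp["Item"]["photo_s3_key"]["S"]
```

This keeps each DynamoDB item well under the 400 KB item limit while still letting the application search details and retrieve photos entirely through AWS APIs.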
ihta_2031
7 months, 1 week ago
Selected Answer: B
Agreed with B
upvoted 4 times
...
aragon_saa
7 months, 3 weeks ago
B https://www.examtopics.com/discussions/amazon/view/88823-exam-aws-certified-developer-associate-topic-1-question-240/
upvoted 4 times
...
Question #12 Topic 1

A developer is creating an application that will give users the ability to store photos from their cellphones in the cloud. The application needs to support tens of thousands of users. The application uses an Amazon API Gateway REST API that is integrated with AWS Lambda functions to process the photos. The application stores details about the photos in Amazon DynamoDB.
Users need to create an account to access the application. In the application, users must be able to upload photos and retrieve previously uploaded photos. The photos will range in size from 300 KB to 5 MB.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon Cognito user pools to manage user accounts. Create an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Use the Lambda function to store the photos and details in the DynamoDB table. Retrieve previously uploaded photos directly from the DynamoDB table.
  • B. Use Amazon Cognito user pools to manage user accounts. Create an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Use the Lambda function to store the photos in Amazon S3. Store the object's S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.
  • C. Create an IAM user for each user of the application during the sign-up process. Use IAM authentication to access the API Gateway API. Use the Lambda function to store the photos in Amazon S3. Store the object's S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.
  • D. Create a users table in DynamoDB. Use the table to manage user accounts. Create a Lambda authorizer that validates user credentials against the users table. Integrate the Lambda authorizer with API Gateway to control access to the API. Use the Lambda function to store the photos in Amazon S3. Store the object's S3 key as part of the photo details in the DynamoDB table. Retrieve previously uploaded photos by querying DynamoDB for the S3 key.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html https://aws.amazon.com/blogs/big-data/building-and-maintaining-an-amazon-s3-metadata-index-without-servers/
upvoted 8 times
...
geekdamsel
Highly Voted 6 months ago
Got this question in exam.
upvoted 6 times
...
jayvarma
Most Recent 3 months ago
As it is not a good practice to create a new IAM user for each user who signs up for the application, option C is ruled out. Amazon Cognito user pools' primary purpose is to authenticate and authorize users of web and mobile applications. As the solution requires the application to store images between 300 KB and 5 MB in size, storing the images in DynamoDB is ruled out because an item in a DynamoDB table cannot exceed 400 KB. The ideal solution is to store the photos in S3 and the object's key in the DynamoDB table. So option B is the right answer.
upvoted 2 times
...
ihta_2031
7 months, 1 week ago
Selected Answer: B
Cognito for user accounts; the DynamoDB item size limit is smaller than the photo sizes in this scenario.
upvoted 4 times
...
pratchatcap
7 months, 2 weeks ago
Selected Answer: B
B is the most valid solution. A is the nearest, but invalid, because you cannot store objects of this size in DynamoDB.
upvoted 3 times
...
Question #13 Topic 1

A company receives food orders from multiple partners. The company has a microservices application that uses Amazon API Gateway APIs with AWS Lambda integration. Each partner sends orders by calling a customized API that is exposed through API Gateway. The API call invokes a shared Lambda function to process the orders.
Partners need to be notified after the Lambda function processes the orders. Each partner must receive updates for only the partner's own orders. The company wants to add new partners in the future with the fewest code changes possible.
Which solution will meet these requirements in the MOST scalable way?

  • A. Create a different Amazon Simple Notification Service (Amazon SNS) topic for each partner. Configure the Lambda function to publish messages for each partner to the partner's SNS topic.
  • B. Create a different Lambda function for each partner. Configure the Lambda function to notify each partner's service endpoint directly.
  • C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to publish messages with specific attributes to the SNS topic. Subscribe each partner to the SNS topic. Apply the appropriate filter policy to the topic subscriptions.
  • D. Create one Amazon Simple Notification Service (Amazon SNS) topic. Subscribe all partners to the SNS topic.

Correct Answer: C 🗳️

Community vote distribution
C (86%)
14%

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
upvoted 8 times
...
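Option C's filter-policy approach can be sketched as two small helpers: one building the SNS Publish call with a `partner` message attribute, and one building the subscription filter policy that limits a partner to its own orders. The attribute name, topic ARN, and partner ids are illustrative assumptions:

```python
import json

def build_publish_kwargs(topic_arn, partner_id, order):
    """Arguments for sns.publish(**kwargs): the order tagged with its partner."""
    return {
        "TopicArn": topic_arn,
        "Message": json.dumps(order),
        "MessageAttributes": {
            "partner": {"DataType": "String", "StringValue": partner_id}
        },
    }

def build_filter_policy(partner_id):
    """Filter policy JSON for one partner's subscription.

    Applied via sns.set_subscription_attributes(SubscriptionArn=...,
    AttributeName="FilterPolicy", AttributeValue=<this string>), so SNS
    delivers only messages whose "partner" attribute matches.
    """
    return json.dumps({"partner": [partner_id]})
```

Onboarding a new partner then means one new subscription with its own filter policy; the shared Lambda's publish code never changes, which is what makes C the most scalable option.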
ninomfr64
Most Recent 2 months, 3 weeks ago
Selected Answer: C
C. adding a new partner would only require to create a new subscription with the right filter
upvoted 1 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: C
C seems the most efficient way. When you add more partners, you can just assign a new attribute value to each partner; with filter policies on those values, you can send notifications to specific partners only.
upvoted 1 times
...
rlnd2000
3 months, 4 weeks ago
Selected Answer: A
The answer is A since this question has two crucial requirements: a) ... with the fewest code changes possible. b) ... in the MOST scalable way. ChatGPT initially gives an incorrect answer and then adjusts its response when the requirements are pointed out.
upvoted 1 times
Skywalker23
1 month, 1 week ago
Cannot be A. It requires change of lambda function code to send notifications to new SNS topics for new partners. Not a scalable solution.
upvoted 1 times
...
rlnd2000
3 months, 4 weeks ago
OOH, another important requirement: each partner must receive updates for only the partner's own orders; that is not achievable with option C.
upvoted 1 times
Jeremy11
3 months, 1 week ago
This part of C seems to meet that requirement: Apply the appropriate filter policy to the topic subscriptions.
upvoted 1 times
...
...
...
geekdamsel
6 months ago
Got this question in the exam. Correct answer is C.
upvoted 3 times
...
Bibay
6 months ago
Selected Answer: C
Option C is the most scalable way to meet the requirements. This solution allows for a single SNS topic to be used for all partners, which minimizes the need for code changes when adding new partners. By publishing messages with specific attributes to the SNS topic and applying the appropriate filter policy to the topic subscriptions, partners will only receive notifications for their own orders. This approach allows for a more flexible and scalable solution, where new partners can be added to the system with minimal changes to the existing codebase. Option A and D may not be scalable when there are a large number of partners, as creating a separate SNS topic for each partner or subscribing all partners to a single topic may not be feasible. Option B may result in a large number of Lambda functions that need to be managed separately.
upvoted 3 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: C
C is the answer
upvoted 2 times
...
robotgeek
7 months ago
Selected Answer: A
The subscription depends on how the subscriber subscribes to the topic. It would be insecure to let customers subscribe to whatever they want; they would get messages from other partners. This is more like a traditional queue scenario.
upvoted 2 times
...
grimsdev
7 months ago
Selected Answer: C
C is the best answer. A would work but is less scalable as you have to create new topics for each new partner.
upvoted 2 times
...
TungNNS
7 months, 1 week ago
Selected Answer: C
C is the answer https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
upvoted 3 times
robotgeek
6 months, 3 weeks ago
So you are allowing Customer A to subscribe to orders from Customer B? sounds like a security fiasco IMHO. Is there any way you as a publisher can limit what Customers can subscribe to which messages with only 1 topic?
upvoted 1 times
...
...
ihta_2031
7 months, 1 week ago
Selected Answer: C
C is the answer. To receive only a subset of the messages, a subscriber must assign a filter policy to the topic subscription.
upvoted 4 times
...
shahs10
7 months, 1 week ago
Selected Answer: A
I think Option A should be the answer where for each partner we should have an SNS topic
upvoted 1 times
...
Question #14 Topic 1

A financial company must store original customer records for 10 years for legal reasons. A complete record contains personally identifiable information (PII). According to local regulations, PII is available to only certain people in the company and must not be shared with third parties. The company needs to make the records available to third-party organizations for statistical analysis without sharing the PII.
A developer wants to store the original immutable record in Amazon S3. Depending on who accesses the S3 document, the document should be returned as is or with all the PII removed. The developer has written an AWS Lambda function to remove the PII from the document. The function is named removePii.
What should the developer do so that the company can meet the PII requirements while maintaining only one copy of the document?

  • A. Set up an S3 event notification that invokes the removePii function when an S3 GET request is made. Call Amazon S3 by using a GET request to access the object without PII.
  • B. Set up an S3 event notification that invokes the removePii function when an S3 PUT request is made. Call Amazon S3 by using a PUT request to access the object without PII.
  • C. Create an S3 Object Lambda access point from the S3 console. Select the removePii function. Use S3 Access Points to access the object without PII.
  • D. Create an S3 access point from the S3 console. Use the access point name to call the GetObjectLegalHold S3 API function. Pass in the removePii function name to access the object without PII.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 3 weeks ago
Selected Answer: C
C https://aws.amazon.com/s3/features/object-lambda/
upvoted 8 times
...
aragon_saa
Highly Voted 7 months, 3 weeks ago
C https://www.examtopics.com/discussions/amazon/view/88229-exam-aws-certified-developer-associate-topic-1-question-174/
upvoted 7 times
...
pagyabeng
Most Recent 5 months, 4 weeks ago
Why is it C?
upvoted 2 times
...
geekdamsel
6 months ago
Got this question in the exam. Correct answer is C.
upvoted 2 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: C
C answer
upvoted 1 times
...
ihta_2031
7 months, 1 week ago
Selected Answer: C
It is C
upvoted 3 times
...
Question #15 Topic 1

A developer is deploying an AWS Lambda function. The developer wants the ability to return to older versions of the function quickly and seamlessly.
How can the developer achieve this goal with the LEAST operational overhead?

  • A. Use AWS OpsWorks to perform blue/green deployments.
  • B. Use a function alias with different versions.
  • C. Maintain deployment packages for older versions in Amazon S3.
  • D. Use AWS CodePipeline for deployments and rollbacks.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

ubiqinon
5 months, 3 weeks ago
B is the least overhead solution
upvoted 3 times
...
geekdamsel
6 months ago
Got this question in the exam. Correct answer is B.
upvoted 2 times
...
zk1200
6 months, 4 weeks ago
Selected Answer: B
I considered D as well, which refers to using CodePipeline; however, that adds more work, so an alias makes more sense.
upvoted 2 times
...
ihta_2031
7 months, 1 week ago
Selected Answer: B
lambda function version => alias
upvoted 4 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
upvoted 4 times
...
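Option B's rollback can be sketched as a single `UpdateAlias` call; repointing the alias is what makes the switch quick and seamless, since callers invoke the alias ARN and pick up the change without any redeployment. The function name, alias name, and version number are illustrative assumptions:

```python
def roll_back(client, function_name, alias, previous_version):
    """Point a Lambda alias back at a previously published version.

    `client` is expected to expose update_alias(...), e.g. a real boto3
    Lambda client. Versions are immutable snapshots, so rolling back is
    just moving the alias pointer, not redeploying code.
    """
    return client.update_alias(
        FunctionName=function_name, Name=alias, FunctionVersion=previous_version
    )
```

For example, if the `live` alias points at version 4 and it misbehaves, `roll_back(lambda_client, "orders-fn", "live", "3")` would restore version 3 instantly.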
aragon_saa
7 months, 3 weeks ago
B https://www.examtopics.com/discussions/amazon/view/96149-exam-aws-certified-developer-associate-topic-1-question-441/
upvoted 3 times
...
Question #16 Topic 1

A developer has written an AWS Lambda function. The function is CPU-bound. The developer wants to ensure that the function returns responses quickly.
How can the developer improve the function's performance?

  • A. Increase the function's CPU core count.
  • B. Increase the function's memory.
  • C. Increase the function's reserved concurrency.
  • D. Increase the function's timeout.

Correct Answer: B 🗳️

Community vote distribution
B (96%)
4%

ihta_2031
Highly Voted 7 months, 1 week ago
Selected Answer: B
CPU utilization => increase memory
upvoted 11 times
...
Kashan6109
Most Recent 3 months, 1 week ago
Selected Answer: B
Option B is correct. The only adjustable parameter (in terms of hardware) is Lambda memory. Increasing Lambda memory results in an automatic adjustment of CPU. Lambda memory is adjustable from 128 MB up to 10 GB.
upvoted 4 times
...
Majong
5 months, 1 week ago
Selected Answer: B
Lambda allocates CPU power in proportion to the amount of memory configured. You can read more here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html#configuration-memory-console
upvoted 4 times
...
Devon_Fazekas
6 months ago
Increasing the function's CPU core count is not an option in AWS Lambda. AWS Lambda automatically manages the allocation of CPU power and only allows scaling of memory.
upvoted 2 times
...
geekdamsel
6 months ago
Got this question in the exam. The correct answer is B.
upvoted 3 times
...
Bibay
6 months ago
Selected Answer: B
B. Increase the function's memory. The performance of an AWS Lambda function is primarily determined by the amount of allocated memory. When you increase the memory, you also increase the available CPU and network resources. This can result in faster execution times, especially for CPU-bound functions. Increasing the CPU core count, reserved concurrency, or timeout may not have as significant an impact on performance as increasing memory.
upvoted 1 times
...
blathul
6 months, 2 weeks ago
Selected Answer: B
Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance. https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html
upvoted 1 times
...
Syre
6 months, 3 weeks ago
Selected Answer: A
On this particular question the answer is A. While increasing memory can indirectly improve CPU performance, it's not always the most effective solution for CPU-bound functions, and increasing the CPU core count is usually a better option for improving performance in such cases. Please note: CPU-bound functions. This question is meant to trick you.
upvoted 1 times
Majong
5 months, 1 week ago
In this particular question it is B. You are right that in normal question it might be A but for a Lambda function you are not able to change the CPU. Lambda allocates CPU power in proportion to the amount of memory configured. You can read more here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html#configuration-memory-console
upvoted 4 times
...
...
Untamables
7 months, 3 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html#configuration-memory-console
upvoted 3 times
...
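As several commenters note, Lambda's CPU allocation scales linearly with configured memory, and AWS documents that 1,769 MB corresponds to the equivalent of one full vCPU. A small sketch (the helper name is ours, not an AWS API) makes the proportionality concrete:

```python
def approx_vcpus(memory_mb):
    """Rough vCPU share for a given Lambda memory setting.

    AWS documents that 1,769 MB equals one full vCPU and that CPU power
    scales in proportion to memory across the 128 MB - 10,240 MB range,
    so a CPU-bound function gets faster simply by raising memory.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return memory_mb / 1769
```

For example, doubling a function's memory from 1,769 MB to 3,538 MB roughly doubles its compute share.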
Question #17 Topic 1

For a deployment using AWS CodeDeploy, what is the run order of the hooks for in-place deployments?

  • A. BeforeInstall -> ApplicationStop -> ApplicationStart -> AfterInstall
  • B. ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart
  • C. BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart
  • D. ApplicationStop -> BeforeInstall -> ValidateService -> ApplicationStart

Correct Answer: A 🗳️

Community vote distribution
B (82%)
A (18%)

pratchatcap
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
It's B. Check the image in the link. https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server
upvoted 16 times
awsdummie
5 months, 1 week ago
Answer A for in-place deployment
upvoted 2 times
...
...
quanbui
Most Recent 3 weeks, 6 days ago
ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart -> ValidateService. Ref: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
upvoted 1 times
...
Skywalker23
1 month, 1 week ago
Selected Answer: B
The application must be stopped before installation. Otherwise the installation may corrupt the running application's files and cause damage. Not good.
upvoted 2 times
...
Tony88
2 months ago
Selected Answer: B
Stopped -> Installed -> Started -> Validated. Go with B.
upvoted 2 times
...
ninomfr64
2 months, 3 weeks ago
Selected Answer: B
It's B as per the doc https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server:~:text=a%20load%20balancer.-,Lifecycle%20event%20hook%20availability,-The%20following%20table
upvoted 1 times
...
sp323
2 months, 3 weeks ago
Application start is after install
upvoted 1 times
...
fcbc62d
3 months, 1 week ago
Selected Answer: B
For in-place deployment B is correct. https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
upvoted 1 times
...
jipark
3 months, 1 week ago
Selected Answer: B
This image explains it all: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server
upvoted 1 times
...
ScherbakovMike
5 months, 1 week ago
Definitely, B: the order is the same in case of InPlace and Blue/Green deployment: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-availability
upvoted 1 times
...
awsdummie
5 months, 1 week ago
Selected Answer: A
Refer to the video at the 18:00 timestamp: https://youtu.be/ISttjCIBd6U
upvoted 1 times
...
Nagendhar
6 months ago
Ans: A. For an in-place deployment using AWS CodeDeploy, the run order of the hooks is option A, "BeforeInstall -> ApplicationStop -> ApplicationStart -> AfterInstall." This is the correct order of hooks for an in-place deployment, where the deployment package is installed on the same set of Amazon EC2 instances that are running the current version of the application.
upvoted 2 times
...
DeaconStJohn
6 months, 2 weeks ago
Selected Answer: B
I'll go with B based on the link provided by others
upvoted 2 times
...
Syre
6 months, 3 weeks ago
Selected Answer: A
You guys should read the questions carefully. Answer is A. You are confusing the run order of hooks for in-place deployments with the run order of hooks for blue/green deployments. For blue/green deployments, the run order of the hooks is indeed ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart, which matches option B. However, for in-place deployments, the correct run order of the hooks is BeforeInstall -> ApplicationStop -> AfterInstall -> ApplicationStart, as stated in option A.
upvoted 3 times
[Removed]
3 months, 3 weeks ago
BeforeInstall runs after ApplicationStop for ALL deployment types. The correct answer is B.
upvoted 1 times
...
DeaconStJohn
6 months, 2 weeks ago
From the below link: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server Neither type of deployment follows this order. BeforeInstall -> ApplicationStop -> AfterInstall -> ApplicationStart
upvoted 2 times
...
...
brandon87
7 months, 1 week ago
Selected Answer: B
Refer to the table. ValidateService is the last step in this scenario. https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
upvoted 3 times
...
March2023
7 months, 2 weeks ago
Selected Answer: A
The answer is A
upvoted 2 times
March2023
7 months, 2 weeks ago
Looks like it's B.
upvoted 2 times
...
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: B
B is correct answer
upvoted 3 times
...
prabhay786
7 months, 3 weeks ago
It should be B.
upvoted 1 times
...
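The run order the community converges on (answer B) can be written down and checked. The list below follows the AppSpec "hooks" table for EC2/on-premises in-place deployments; DownloadBundle and Install are reserved for the CodeDeploy agent and cannot have scripts attached, which is why the exam options omit them.

```python
# In-place deployment lifecycle events, in run order, per the AppSpec
# 'hooks' documentation (without a load balancer).
IN_PLACE_HOOK_ORDER = [
    "ApplicationStop",
    "DownloadBundle",   # reserved for the CodeDeploy agent
    "BeforeInstall",
    "Install",          # reserved for the CodeDeploy agent
    "AfterInstall",
    "ApplicationStart",
    "ValidateService",
]

def runs_before(first, second):
    """True if `first` executes before `second` in an in-place deployment."""
    return IN_PLACE_HOOK_ORDER.index(first) < IN_PLACE_HOOK_ORDER.index(second)
```

Stripping the reserved events leaves exactly option B: ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart, with ValidateService last.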
Question #18 Topic 1

A company is building a serverless application on AWS. The application uses an AWS Lambda function to process customer orders 24 hours a day, 7 days a week. The Lambda function calls an external vendor's HTTP API to process payments.
During load tests, a developer discovers that the external vendor payment processing API occasionally times out and returns errors. The company expects that some payment processing API calls will return errors.
The company wants the support team to receive notifications in near real time only when the external payment processing API error rate exceeds 5% of the total number of transactions in an hour. Developers need to use an existing Amazon Simple Notification Service (Amazon SNS) topic that is configured to notify the support team.
Which solution will meet these requirements?

  • A. Write the results of payment processing API calls to Amazon CloudWatch. Use Amazon CloudWatch Logs Insights to query the CloudWatch logs. Schedule the Lambda function to check the CloudWatch logs and notify the existing SNS topic.
  • B. Publish custom metrics to CloudWatch that record the failures of the external payment processing API calls. Configure a CloudWatch alarm to notify the existing SNS topic when error rate exceeds the specified rate.
  • C. Publish the results of the external payment processing API calls to a new Amazon SNS topic. Subscribe the support team members to the new SNS topic.
  • D. Write the results of the external payment processing API calls to Amazon S3. Schedule an Amazon Athena query to run at regular intervals. Configure Athena to send notifications to the existing SNS topic when the error rate exceeds the specified rate.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Bibay
Highly Voted 6 months ago
Selected Answer: B
B. Publish custom metrics to CloudWatch that record the failures of the external payment processing API calls. Configure a CloudWatch alarm to notify the existing SNS topic when the error rate exceeds the specified rate is the best solution to meet the requirements. With CloudWatch custom metrics, developers can publish and monitor custom data points, including the number of failed requests to the external payment processing API. A CloudWatch alarm can be configured to notify an SNS topic when the error rate exceeds the specified rate, allowing the support team to be notified in near real-time. Option A is not optimal since it involves scheduling a Lambda function to check the CloudWatch logs. Option C may not provide the desired functionality since it does not specify a rate at which to notify the support team. Option D is more complex than necessary, as it involves writing the results to S3 and configuring an Athena query to send notifications to an SNS topic.
upvoted 8 times
...
Tony88
Most Recent 2 months ago
Selected Answer: B
The requirement is "near real-time" notification, so you should not use a scheduled solution. Creating a new SNS topic makes no sense.
upvoted 2 times
Ponyi
2 days, 23 hours ago
In the question, it is also mentioned that "Developer needs to use the existing SNS topic...."
upvoted 1 times
...
...
jayvarma
2 months, 4 weeks ago
Option B. Using custom metrics, Developers will be able to publish and monitor custom data points such as the no. of failed requests to the external payment processing API. Create a CloudWatch alarm and configure it to be triggered when the rate of error exceeds the specified number in the question.
upvoted 1 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: B
It is B
upvoted 3 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: B
The correct answer is B. You can use the Embedded Metrics format to embed custom metrics alongside detailed log event data. CloudWatch automatically extracts the custom metrics so you can visualize and alarm on them, for real-time incident detection. https://docs.aws.amazon.com/lambda/latest/operatorguide/custom-metrics.html
upvoted 3 times
...
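The 5% condition in answer B boils down to a simple evaluation that a CloudWatch alarm (or a metric math expression over the custom failure and total-transaction metrics) would perform over each hourly period. A minimal sketch of that decision, with a hypothetical function name:

```python
def error_rate_exceeded(failures, total, threshold=0.05):
    """Decide whether the hourly failure rate crosses the alarm threshold.

    Mirrors what a CloudWatch alarm on custom metrics would evaluate:
    failures / total > 5%. With no transactions there is nothing to alarm on.
    """
    if total == 0:
        return False
    return failures / total > threshold
```

The alarm action would then publish to the existing SNS topic, giving near-real-time notification without any scheduled polling.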
Question #19 Topic 1

A company is offering APIs as a service over the internet to provide unauthenticated read access to statistical information that is updated daily. The company uses Amazon API Gateway and AWS Lambda to develop the APIs. The service has become popular, and the company wants to enhance the responsiveness of the APIs.
Which action can help the company achieve this goal?

  • A. Enable API caching in API Gateway.
  • B. Configure API Gateway to use an interface VPC endpoint.
  • C. Enable cross-origin resource sharing (CORS) for the APIs.
  • D. Configure usage plans and API keys in API Gateway.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Bibay
Highly Voted 6 months ago
Selected Answer: A
A. Enable API caching in API Gateway can help the company enhance the responsiveness of the APIs. By enabling caching, API Gateway stores the responses from the API and returns them for subsequent requests instead of forwarding the requests to Lambda. This reduces the number of requests to Lambda, improves API performance, and reduces latency for users.
upvoted 10 times
Pupina
4 months, 1 week ago
I agree
upvoted 1 times
...
yashika2005
5 months, 1 week ago
thanks a ton for all your explanations in every answer! Really appreciate it! Very helpful!
upvoted 1 times
...
...
Tony88
2 months ago
Selected Answer: A
Go with A.
A. Caching is the general solution for improving the performance of infrequently changing data (in this case the data changes daily, which is not really frequent).
B. An interface endpoint is a VPC concept; in this architecture we don't need to be concerned with VPCs. For those who are interested, look up interface endpoints and gateway endpoints.
C. CORS is short for cross-origin resource sharing. It is a distractor here. You would consider CORS when your client cannot access your API Gateway resource, not when you want to improve performance.
D. A usage plan is used when your API clients' behaviour is predictable, and it can prevent abnormal usage.
upvoted 2 times
...
yuruyenucakc
2 months, 2 weeks ago
A -> Caching frequently accessed API calls reduces processing time every time the API is called.
B -> You would configure a VPC endpoint to change the network security of your application, so it does not necessarily increase performance.
C -> CORS (cross-origin resource sharing) allows you to process API calls that come from outside your own origin; again, nothing to do with performance. One use case of this feature is keeping your web app APIs reachable from the public internet, for which you should enable CORS.
D -> This is mainly for throttling and controlling who can access the API and at what rate. While it's useful for controlling and metering access, it doesn't enhance the responsiveness of the API.
upvoted 1 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: A
I vote for A
upvoted 3 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
upvoted 3 times
...
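Answer A works because a cached response is served without invoking the Lambda backend until its TTL expires, which suits statistics that change only daily. A toy Python model of that behavior (this illustrates the idea, not API Gateway's actual implementation):

```python
import time

class TtlCache:
    """Toy model of API Gateway response caching: a cached response is
    returned until its TTL expires, so the backend is not invoked."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, cached_at)

    def get(self, key, compute, now=None):
        """Return (value, served_from_cache)."""
        now = time.time() if now is None else now
        hit = self.store.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0], True        # cache hit: backend skipped
        value = compute()              # cache miss: invoke the backend
        self.store[key] = (value, now)
        return value, False
```

With a long TTL relative to the daily update cadence, almost every request is a cache hit and never reaches Lambda, which is exactly the responsiveness gain the question is after.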
Question #20 Topic 1

A developer wants to store information about movies. Each movie has a title, release year, and genre. The movie information also can include additional properties about the cast and production crew. This additional information is inconsistent across movies. For example, one movie might have an assistant director, and another movie might have an animal trainer.
The developer needs to implement a solution to support the following use cases:
For a given title and release year, get all details about the movie that has that title and release year.
For a given title, get all details about all movies that have that title.
For a given genre, get all details about all movies in that genre.
Which data store configuration will meet these requirements?

  • A. Create an Amazon DynamoDB table. Configure the table with a primary key that consists of the title as the partition key and the release year as the sort key. Create a global secondary index that uses the genre as the partition key and the title as the sort key.
  • B. Create an Amazon DynamoDB table. Configure the table with a primary key that consists of the genre as the partition key and the release year as the sort key. Create a global secondary index that uses the title as the partition key.
  • C. On an Amazon RDS DB instance, create a table that contains columns for title, release year, and genre. Configure the title as the primary key.
  • D. On an Amazon RDS DB instance, create a table where the primary key is the title and all other data is encoded into JSON format as one additional column.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Bibay
Highly Voted 6 months ago
Selected Answer: A
A. Create an Amazon DynamoDB table. Configure the table with a primary key that consists of the title as the partition key and the release year as the sort key. Create a global secondary index that uses the genre as the partition key and the title as the sort key. This option is the best choice for the given requirements. By using DynamoDB, the developer can store the movie information in a flexible and scalable NoSQL database. The primary key can be set to the title and release year, allowing for efficient retrieval of information about a specific movie. The global secondary index can be created using the genre as the partition key, allowing for efficient retrieval of information about all movies in a specific genre. Additionally, the use of a NoSQL database like DynamoDB allows for the flexible storage of additional properties about the cast and crew, as each movie can have different properties without affecting the structure of the database.
upvoted 7 times
...
Tony88
Most Recent 2 months ago
Selected Answer: A
Go with A. NoSQL is good when data attributes are inconsistent -> DynamoDB. The primary key should be unique, so go with title + release year.
upvoted 2 times
...
jayvarma
2 months, 4 weeks ago
As the schema for each entry of data into the database is not the same all the time, We would require a NoSQL database. So, RDS DB instance is ruled out. The answer is between A and B. As we would need the partition key to be as unique as possible, we would like to have the title of the movie as the partition key. Because having the partition key as the genre will create a hot partition problem and our data stored in the DynamoDB will be skewed. So option A is the answer.
upvoted 3 times
...
Krok
7 months ago
Selected Answer: A
It's A, I totally agree; it's the single appropriate solution. But in my opinion genre isn't a very good choice of GSI partition key: it doesn't have high distribution, and we can get a hot partition.
upvoted 2 times
...
shahs10
7 months, 1 week ago
Selected Answer: A
Option A, because we have to search on the basis of title, so it is better to partition by title. We also have to search by genre, so it is a good option to make a GSI using genre as the partition key.
upvoted 2 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: A
The correct answer is A. Amazon DynamoDB is suited for storing inconsistent attributes data across items. Option B is wrong. This solution does not help get items with the condition of the combination, title and release year.
upvoted 3 times
...
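Answer A's key design can be spelled out as the `create_table` parameters it implies (boto3 parameter shape; the table, index, and attribute names here are hypothetical). The base table answers the title and title-plus-year queries; the GSI answers the genre query.

```python
# Parameters you could pass to boto3's dynamodb.create_table(**movies_table).
movies_table = {
    "TableName": "Movies",
    "KeySchema": [
        {"AttributeName": "title", "KeyType": "HASH"},          # partition key
        {"AttributeName": "release_year", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "title", "AttributeType": "S"},
        {"AttributeName": "release_year", "AttributeType": "N"},
        {"AttributeName": "genre", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "GenreTitleIndex",
            "KeySchema": [
                {"AttributeName": "genre", "KeyType": "HASH"},
                {"AttributeName": "title", "KeyType": "RANGE"},
            ],
            # Project all attributes so genre queries return full details.
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

A `Query` on the base table with both keys gets one movie; with only the title partition key it gets all movies of that title; a `Query` on `GenreTitleIndex` gets all movies in a genre.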
Question #21 Topic 1

A developer maintains an Amazon API Gateway REST API. Customers use the API through a frontend UI and Amazon Cognito authentication.
The developer has a new version of the API that contains new endpoints and backward-incompatible interface changes. The developer needs to provide beta access to other developers on the team without affecting customers.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Define a development stage on the API Gateway API. Instruct the other developers to point the endpoints to the development stage.
  • B. Define a new API Gateway API that points to the new API application code. Instruct the other developers to point the endpoints to the new API.
  • C. Implement a query parameter in the API application code that determines which code version to call.
  • D. Specify new API Gateway endpoints for the API endpoints that the developer wants to add.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Bibay
Highly Voted 6 months ago
Selected Answer: A
Option A is the correct solution to meet the requirements with the least operational overhead. Defining a development stage on the API Gateway API enables other developers to test the new version of the API without affecting the production environment. This approach allows the developers to work on the new version of the API independently and avoid conflicts with the production environment. The other options involve creating a new API or new endpoints, which could introduce additional operational overhead, such as managing multiple APIs or endpoints, configuring access control, and updating the frontend UI to point to the new endpoints or API. Option C also introduces additional complexity by requiring the implementation of a query parameter to determine which code version to call.
upvoted 6 times
...
Tony88
Most Recent 2 months ago
Selected Answer: A
The best practice is to define a development stage.
upvoted 2 times
...
jayvarma
2 months, 4 weeks ago
Option A is the right answer. Defining a development stage on the API Gateway API would provide other developers with a way to test the newer version of the API without affecting prod. The rest of the options would create a lot of operational overhead.
upvoted 1 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: A
The developer should define a development stage on the API Gateway API. They should then instruct the other developers to point the endpoints to the development stage. This solution will meet the requirements with the least operational overhead.
upvoted 1 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-stages.html https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
upvoted 3 times
...
aragon_saa
7 months, 3 weeks ago
A https://www.examtopics.com/discussions/amazon/view/88872-exam-aws-certified-developer-associate-topic-1-question-318/
upvoted 3 times
...
Question #22 Topic 1

A developer is creating an application that will store personal health information (PHI). The PHI needs to be encrypted at all times. An encrypted Amazon RDS for MySQL DB instance is storing the data. The developer wants to increase the performance of the application by caching frequently accessed data while adding the ability to sort or rank the cached datasets.
Which solution will meet these requirements?

  • A. Create an Amazon ElastiCache for Redis instance. Enable encryption of data in transit and at rest. Store frequently accessed data in the cache.
  • B. Create an Amazon ElastiCache for Memcached instance. Enable encryption of data in transit and at rest. Store frequently accessed data in the cache.
  • C. Create an Amazon RDS for MySQL read replica. Connect to the read replica by using SSL. Configure the read replica to store frequently accessed data.
  • D. Create an Amazon DynamoDB table and a DynamoDB Accelerator (DAX) cluster for the table. Store frequently accessed data in the DynamoDB table.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A You can use Amazon Elasticache for Redis Sorted Sets to easily implement a dashboard that keeps a list of sorted data by their rank. https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html#elasticache-for-redis-use-cases-gaming https://aws.amazon.com/elasticache/redis-vs-memcached/
upvoted 10 times
jipark
3 months ago
In sum, Redis features encryption and PCI DSS compliance; Memcached supports Auto Discovery.
upvoted 2 times
...
...
Bibay
Highly Voted 6 months ago
Selected Answer: A
To meet the requirements of caching frequently accessed data while adding the ability to sort or rank cached datasets, a developer should choose Amazon ElastiCache for Redis. ElastiCache is a web service that provides an in-memory data store in the cloud, and it supports both Memcached and Redis engines. While both engines are suitable for caching frequently accessed data, Redis is a better choice for this use case because it provides sorted sets and other data structures that allow for sorting and ranking of cached datasets. The data in ElastiCache can be encrypted at rest and in transit, ensuring the security of the PHI. Therefore, option A is the correct answer.
upvoted 6 times
...
nmc12
Most Recent 1 month, 1 week ago
Redis: Supports various data structures such as strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs, and geospatial indexes. Memcached: Primarily supports string-based keys and values; does not support advanced data structures.
upvoted 2 times
...
brandon87
7 months, 1 week ago
Selected Answer: A
ElastiCache for Redis also features Online Cluster Resizing, supports encryption, and is HIPAA eligible and PCI DSS compliant. https://aws.amazon.com/elasticache/redis-vs-memcached/
upvoted 5 times
...
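What makes Redis (option A) rather than Memcached fit the "sort or rank" requirement is its sorted-set data type. The pure-Python sketch below only mimics the semantics of the ZADD and ZREVRANGE commands to show the idea; real code would issue these commands through a Redis client against the TLS-enabled ElastiCache endpoint, and the member names are hypothetical.

```python
def zadd(zset, member, score):
    """Mimics Redis ZADD: associate a numeric score with a member."""
    zset[member] = score

def zrevrange(zset, start, stop):
    """Mimics Redis ZREVRANGE: members ordered by score, highest first
    (stop index is inclusive, as in Redis)."""
    ranked = sorted(zset, key=zset.get, reverse=True)
    return ranked[start:stop + 1]

# Rank cached records by how often they are accessed.
access_counts = {}
zadd(access_counts, "record:1001", 42)
zadd(access_counts, "record:1002", 7)
zadd(access_counts, "record:1003", 99)
```

Memcached stores opaque key/value pairs only, so this kind of server-side ranking is exactly what it cannot do.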
Question #23 Topic 1

A company has a multi-node Windows legacy application that runs on premises. The application uses a network shared folder as a centralized configuration repository to store configuration files in .xml format. The company is migrating the application to Amazon EC2 instances. As part of the migration to AWS, a developer must identify a solution that provides high availability for the repository.
Which solution will meet this requirement MOST cost-effectively?

  • A. Mount an Amazon Elastic Block Store (Amazon EBS) volume onto one of the EC2 instances. Deploy a file system on the EBS volume. Use the host operating system to share a folder. Update the application code to read and write configuration files from the shared folder.
  • B. Deploy a micro EC2 instance with an instance store volume. Use the host operating system to share a folder. Update the application code to read and write configuration files from the shared folder.
  • C. Create an Amazon S3 bucket to host the repository. Migrate the existing .xml files to the S3 bucket. Update the application code to use the AWS SDK to read and write configuration files from Amazon S3.
  • D. Create an Amazon S3 bucket to host the repository. Migrate the existing .xml files to the S3 bucket. Mount the S3 bucket to the EC2 instances as a local volume. Update the application code to read and write configuration files from the disk.

Correct Answer: C 🗳️

Community vote distribution
C (83%)
D (17%)

shahs10
Highly Voted 7 months, 1 week ago
Why isn't EFS an option here to replace the shared file system?
upvoted 8 times
nmc12
1 month, 1 week ago
It would be the best solution, but we can use S3 instead of EFS here.
upvoted 1 times
...
...
Bibay
Highly Voted 5 months, 3 weeks ago
Option C is the most cost-effective solution to provide high availability for the centralized configuration repository. Amazon S3 provides a highly durable and available object storage service. S3 stores objects redundantly across multiple devices and multiple facilities within a region, making it highly available. The developer can migrate the existing .xml files to an S3 bucket and update the application code to use the AWS SDK to read and write configuration files from Amazon S3. Options A and B are not the best solutions because they require the developer to use the host operating system to share a folder, which can lead to a single point of failure. Option D is not a recommended solution because it is not a direct way of accessing an S3 bucket. While it is possible to use third-party tools to mount an S3 bucket as a local disk, it can lead to performance issues and additional complexity.
upvoted 5 times
...
HanTran0795
Most Recent 3 weeks, 1 day ago
Selected Answer: D
It is a Windows legacy application. What if the SDK doesn't support the app? I choose D.
upvoted 1 times
ronn555
2 days, 17 hours ago
C. S3 buckets can only be mounted directly to Linux EC2 instances.
upvoted 1 times
...
...
AhmedAliHashmi
2 months, 1 week ago
Correct answer is C
upvoted 1 times
...
senadevtrd
5 months, 1 week ago
Selected Answer: C
Of these options, this is the most correct.
upvoted 1 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonS3.html https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html
upvoted 4 times
...
aragon_saa
7 months, 3 weeks ago
C https://www.examtopics.com/discussions/amazon/view/88701-exam-aws-certified-developer-associate-topic-1-question-227/
upvoted 4 times
...
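Answer C's "use the AWS SDK to read and write configuration files" comes down to two small S3 calls. The helpers below are a sketch under that assumption: they accept any client exposing S3's `get_object`/`put_object` operations (boto3's real operation and parameter names), and the bucket and key names in the usage note are hypothetical.

```python
def load_config(s3_client, bucket, key):
    """Read one .xml configuration file from the S3-hosted repository."""
    resp = s3_client.get_object(Bucket=bucket, Key=key)
    return resp["Body"].read().decode("utf-8")

def save_config(s3_client, bucket, key, xml_text):
    """Write a configuration file back; S3's redundancy across multiple
    facilities replaces the on-premises shared folder's availability story."""
    s3_client.put_object(Bucket=bucket, Key=key, Body=xml_text.encode("utf-8"))
```

In the migrated application you would call, for example, `load_config(boto3.client("s3"), "app-config", "nodes/web01.xml")` wherever the code previously read from the network share.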
Question #24 Topic 1

A company wants to deploy and maintain static websites on AWS. Each website's source code is hosted in one of several version control systems, including AWS CodeCommit, Bitbucket, and GitHub.
The company wants to implement phased releases by using development, staging, user acceptance testing, and production environments in the AWS Cloud. Deployments to each environment must be started by code merges on the relevant Git branch. The company wants to use HTTPS for all data exchange. The company needs a solution that does not require servers to run continuously.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Host each website by using AWS Amplify with a serverless backend. Connect the repository branches that correspond to each of the desired environments. Start deployments by merging code changes to a desired branch.
  • B. Host each website in AWS Elastic Beanstalk with multiple environments. Use the EB CLI to link each repository branch. Integrate AWS CodePipeline to automate deployments from version control code merges.
  • C. Host each website in different Amazon S3 buckets for each environment. Configure AWS CodePipeline to pull source code from version control. Add an AWS CodeBuild stage to copy source code to Amazon S3.
  • D. Host each website on its own Amazon EC2 instance. Write a custom deployment script to bundle each website's static assets. Copy the assets to Amazon EC2. Set up a workflow to run the script when code is merged.

Correct Answer: A 🗳️

Community vote distribution
A (88%)
12%

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
The correct answer is A. AWS Amplify is an all in one service for the requirement. https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html Option C is almost correct, but it does not mention how to implement HTTPS. Option B and D are wrong. They need to keep running servers.
upvoted 14 times
...
Bibay
Highly Voted 5 months, 3 weeks ago
The solution that will meet these requirements with the LEAST operational overhead is option A: Host each website by using AWS Amplify with a serverless backend. AWS Amplify is a fully managed service that allows developers to build and deploy web applications and static websites. With Amplify, developers can easily connect their repositories, such as AWS CodeCommit, Bitbucket, and GitHub, to automatically build and deploy changes to the website based on code merges. Amplify also supports phased releases with multiple environments, including development, staging, user acceptance testing, and production, which can be linked to specific branches in the repository. Additionally, Amplify uses HTTPS for all data exchange by default and has a serverless backend, which means there are no servers to maintain. Overall, this solution provides the least operational overhead while meeting all the specified requirements.
upvoted 10 times
yashika2005
5 months, 1 week ago
thanks a ton for all the explanations!
upvoted 1 times
...
...
Cerakoted
Most Recent 3 weeks, 4 days ago
Selected Answer: A
Check About AWS Amplify Hosting
upvoted 1 times
...
jayvarma
2 months, 4 weeks ago
Option A is the answer. Of course, until now we have been used to the fact that we need to use S3 for static website hosting. But there are a lot of requirements described in the question, like source code hosting, phased releases with different environments, and HTTPS for all data exchange (which is not possible with S3 website hosting). AWS Amplify does all of this for you with the least operational overhead.
upvoted 3 times
...
Devon_Fazekas
6 months ago
For fellow ACloudGurus, I was taught to associate static website hosting to S3 buckets. But apparently, "least operational overhead" is achieved using Amplify, as it natively supports deployment to various environments and seamlessly integrates with version control systems. Whereas, S3 requires configuring multiple buckets, configuring CodePipeline and integrating with each bucket.
upvoted 3 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: C
Static website hosting should be C, using S3.
upvoted 2 times
Arnaud92
5 months, 3 weeks ago
Sadly, static website hosting on S3 does not support HTTPS, so the response is A ;-) https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
upvoted 5 times
jipark
3 months ago
that is critical key !! thanks a lot.
upvoted 2 times
...
...
...
Question #25 Topic 1

A company is migrating an on-premises database to Amazon RDS for MySQL. The company has read-heavy workloads. The company wants to refactor the code to achieve optimum read performance for queries.
Which solution will meet this requirement with LEAST current and future effort?

  • A. Use a multi-AZ Amazon RDS deployment. Increase the number of connections that the code makes to the database or increase the connection pool size if a connection pool is in use.
  • B. Use a multi-AZ Amazon RDS deployment. Modify the code so that queries access the secondary RDS instance.
  • C. Deploy Amazon RDS with one or more read replicas. Modify the application code so that queries use the URL for the read replicas.
  • D. Use open source replication software to create a copy of the MySQL database on an Amazon EC2 instance. Modify the application code so that queries use the IP address of the EC2 instance.

Correct Answer: B 🗳️

Community vote distribution
C (100%)

Skywalker23
1 month, 1 week ago
Selected Answer: C
Read-heavy access needs read replicas as the right solution.
upvoted 3 times
...
Tony88
2 months ago
Selected Answer: C
Keyword: heavy read
upvoted 2 times
...
Akash619
2 months, 2 weeks ago
Selected Answer: C
Read Replicas for high performance read operations
upvoted 2 times
...
jayvarma
2 months, 4 weeks ago
Keyword: Achieve Optimum read performance for queries. Answer: Use Read Replicas and use that specific URL for read queries.
upvoted 1 times
...
Devon_Fazekas
6 months ago
Selected Answer: C
Multi-AZ is for disaster recovery, not read scalability or performance.
upvoted 3 times
...
Malkia
6 months ago
Selected Answer: C
C answer
upvoted 1 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: C
C answer
upvoted 3 times
...
Krok
7 months ago
Selected Answer: C
It's C.
upvoted 2 times
...
Dun6
7 months, 2 weeks ago
Selected Answer: C
Heavy reads, use read replica
upvoted 3 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: C
C https://aws.amazon.com/rds/features/read-replicas/
upvoted 3 times
...
March2023
7 months, 2 weeks ago
Selected Answer: C
It is C
upvoted 2 times
...
Ajaykumarlp
7 months, 2 weeks ago
It is C
upvoted 2 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: C
Seems like it is C
upvoted 2 times
...
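Answer C's read routing can be sketched in application code. This is a minimal sketch with hypothetical endpoint hostnames; a real application would hold one connection pool per endpoint and use a rule like this to pick a pool per query:

```python
# Hypothetical RDS endpoints -- substitute your own hostnames.
PRIMARY_ENDPOINT = "mydb.abc123.us-east-1.rds.amazonaws.com"
READ_REPLICA_ENDPOINT = "mydb-replica.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Route read-only statements to the read replica, everything else to the primary."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return READ_REPLICA_ENDPOINT if first_word in ("SELECT", "SHOW") else PRIMARY_ENDPOINT
```

The point of option C is that only this routing decision changes in the code; the replication itself is handled by RDS.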
Question #26 Topic 1

A developer is creating an application that will be deployed on IoT devices. The application will send data to a RESTful API that is deployed as an AWS Lambda function. The application will assign each API request a unique identifier. The volume of API requests from the application can randomly increase at any given time of day.
During periods of request throttling, the application might need to retry requests. The API must be able to handle duplicate requests without inconsistencies or data loss.
Which solution will meet these requirements?

  • A. Create an Amazon RDS for MySQL DB instance. Store the unique identifier for each request in a database table. Modify the Lambda function to check the table for the identifier before processing the request.
  • B. Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to check the table for the identifier before processing the request.
  • C. Create an Amazon DynamoDB table. Store the unique identifier for each request in the table. Modify the Lambda function to return a client error response when the function receives a duplicate request.
  • D. Create an Amazon ElastiCache for Memcached instance. Store the unique identifier for each request in the cache. Modify the Lambda function to check the cache for the identifier before processing the request.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Devon_Fazekas
Highly Voted 6 months ago
Selected Answer: B
I originally thought ElastiCache would provide sufficient session management of the unique identifiers with the least latency. But apparently, the scope of this question revolves around durability, not latency. Hence, persistent storage is better suited. And while RDS is a viable solution for durability and performance, the question specifies IoT devices, which typically produce unstructured data that is better handled by NoSQL services like DynamoDB.
upvoted 17 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B The resolution is to make the Lambda function idempotent. https://repost.aws/knowledge-center/lambda-function-idempotent https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/
upvoted 7 times
...
Tony88
Most Recent 2 months ago
Selected Answer: B
Cache topic. Both ElastiCache for Redis and DynamoDB can be used as a cache solution. If you want high performance and low latency, go with Redis; if you want persistent storage, go with DynamoDB.
upvoted 3 times
...
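The idempotency pattern in answer B can be sketched with an in-memory dict standing in for the DynamoDB table (all names here are hypothetical). In production the check-and-record step would be a single conditional `put_item` with `ConditionExpression="attribute_not_exists(request_id)"` so it is atomic:

```python
# In-memory stand-in for the DynamoDB idempotency table.
processed: dict[str, str] = {}

def handle_request(request_id: str, payload: str) -> str:
    """Process a request exactly once; duplicate retries return the stored result."""
    if request_id in processed:          # in DynamoDB: the conditional put fails -> duplicate
        return processed[request_id]
    result = f"processed:{payload}"      # the real work happens only once
    processed[request_id] = result       # in DynamoDB: put_item records the unique identifier
    return result
```

Because duplicates return the original result rather than an error, retried requests succeed without inconsistencies or data loss, which is why B beats C.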
Question #27 Topic 1

A developer wants to expand an application to run in multiple AWS Regions. The developer wants to copy Amazon Machine Images (AMIs) with the latest changes and create a new application stack in the destination Region. According to company requirements, all AMIs must be encrypted in all Regions. However, not all the AMIs that the company uses are encrypted.
How can the developer expand the application to run in the destination Region while meeting the encryption requirement?

  • A. Create new AMIs, and specify encryption parameters. Copy the encrypted AMIs to the destination Region. Delete the unencrypted AMIs.
  • B. Use AWS Key Management Service (AWS KMS) to enable encryption on the unencrypted AMIs. Copy the encrypted AMIs to the destination Region.
  • C. Use AWS Certificate Manager (ACM) to enable encryption on the unencrypted AMIs. Copy the encrypted AMIs to the destination Region.
  • D. Copy the unencrypted AMIs to the destination Region. Enable encryption by default in the destination Region.

Correct Answer: B 🗳️

Community vote distribution
A (62%)
B (38%)

Bibay
Highly Voted 5 months, 3 weeks ago
A. Create new AMIs, and specify encryption parameters. Copy the encrypted AMIs to the destination Region. Delete the unencrypted AMIs. The best solution for meeting the encryption requirement is to create new AMIs with encryption enabled and copy them to the destination Region. By default, when an AMI is copied to another Region, it is not encrypted in the destination Region even if it is encrypted in the source Region. Therefore, the developer must create new encrypted AMIs that can be used in the destination Region. Once the new encrypted AMIs have been created, they can be copied to the destination Region. The unencrypted AMIs can then be deleted to ensure that all instances running in all Regions are using only encrypted AMIs.
upvoted 10 times
...
anhike
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIEncryption.html Encrypt an unencrypted image during copy In this scenario, an AMI backed by an unencrypted root snapshot is copied to an AMI with an encrypted root snapshot. The CopyImage action is invoked with two encryption parameters, including a customer managed key. A is the only logical answer.
upvoted 5 times
...
ronn555
Most Recent 2 days, 16 hours ago
A. When you create an encrypted AMI and do not specify the KMS key, AWS will use the default customer managed key, which is the only multi-Region key. If you select a KMS key from the origin Region, it will not work in the destination Region (presently), so B is not correct.
upvoted 1 times
...
Rameez1
3 weeks, 4 days ago
Selected Answer: A
A is correct. An unencrypted AMI can't be encrypted after creation. You need to create a new encrypted AMI; then it can be copied to other Regions.
upvoted 4 times
...
Cerakoted
3 weeks, 4 days ago
Selected Answer: B
Answer is B check this link https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html#ami-copy-encryption
upvoted 2 times
...
manikantaJ
4 weeks, 1 day ago
Selected Answer: B
Here's why option B is the appropriate choice: AWS KMS Encryption: AWS KMS is a service that allows you to easily enable encryption for your resources, including Amazon Machine Images (AMIs). You can create a customer managed key (CMK) in AWS KMS and use it to encrypt your AMIs. Enable Encryption on Unencrypted AMIs: You can enable encryption for unencrypted AMIs by creating a copy of the AMI and specifying the AWS KMS key to use for encryption during the copy process. This ensures that your new AMIs in the destination Region are encrypted. Maintain Data Integrity: This approach allows you to maintain data integrity and ensure that all AMIs are encrypted in compliance with company requirements.
upvoted 2 times
...
sofiatian
1 month, 2 weeks ago
Selected Answer: B
Copy an unencrypted source AMI to an encrypted target AMI In this scenario, an AMI backed by an unencrypted root snapshot is copied to an AMI with an encrypted root snapshot. The CopyImage action is invoked with two encryption parameters, including a customer managed key. As a result, the encryption status of the root snapshot changes, so that the target AMI is backed by a root snapshot containing the same data as the source snapshot, but encrypted using the specified key. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
upvoted 2 times
...
Ap1011
2 months, 1 week ago
Answer A. For any AMI copy to be encrypted, the source AMI should be encrypted first. You can't encrypt the copy of the AMI if the source is not encrypted.
upvoted 3 times
...
Naj_64
2 months, 2 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIEncryption.html#AMI-encryption-copy "Copy-image behaviors with both Encrypted and KmsKeyId set: An unencrypted snapshot is copied to a snapshot encrypted by the specified KMS key."
upvoted 2 times
Naj_64
2 months, 2 weeks ago
B is wrong. Going with A. You just can't use KMS to encrypt an unencrypted snapshot; you'll need to first create a volume from the snapshot and select the option to encrypt it, making A the correct answer.
upvoted 2 times
...
...
sanjoysarkar
7 months ago
A is the correct answer.
upvoted 1 times
...
Krok
7 months ago
Selected Answer: A
I think it's A. Option D is also correct, but in that case your source AMI stays unencrypted. Options B and C are incorrect; you can't just encrypt an existing unencrypted AMI or create an encrypted AMI from an unencrypted EC2 instance.
upvoted 2 times
...
5aga
7 months, 1 week ago
Selected Answer: A
Read the question carefully. Yes, we can use KMS to encrypt an AMI and use it in multiple Regions, but you cannot directly apply KMS encryption to a non-encrypted AMI. Answer B is wrong.
upvoted 4 times
...
March2023
7 months, 2 weeks ago
Selected Answer: B
My vote is B
upvoted 1 times
...
srikanth923
7 months, 2 weeks ago
Selected Answer: A
you cannot encrypt an existing unencrypted AMI. you need to create an ami with encryption enabled and change its region, so answer is B
upvoted 3 times
srikanth923
7 months, 2 weeks ago
I mean A
upvoted 4 times
March2023
7 months, 2 weeks ago
Just looked this up you're right. A is the only logical answer.
upvoted 1 times
...
...
...
Untamables
7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIEncryption.html Option D is wrong. You must enable the default encryption before copying the unencrypted AMIs.
upvoted 2 times
...
aragon_saa
7 months, 3 weeks ago
B https://www.examtopics.com/discussions/amazon/view/88812-exam-aws-certified-developer-associate-topic-1-question-266/
upvoted 2 times
...
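The encrypt-during-copy step discussed above maps to the EC2 `CopyImage` API. This is a minimal sketch of the boto3 parameters (the AMI ID, Region names, and key ARN are hypothetical); setting `Encrypted=True` makes the destination snapshots encrypted even when the source AMI is not:

```python
# Hypothetical identifiers -- substitute your own AMI ID, Regions, and KMS key ARN.
copy_params = {
    "SourceImageId": "ami-0123456789abcdef0",
    "SourceRegion": "us-east-1",
    "Name": "encrypted-copy",
    "Encrypted": True,  # encrypt the target snapshots during the copy
    "KmsKeyId": "arn:aws:kms:eu-west-1:111122223333:key/example",  # key in the destination Region
}
# Run from the destination Region:
# boto3.client("ec2", region_name="eu-west-1").copy_image(**copy_params)
```

Note the call is issued against the destination Region, and the KMS key must exist there.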
Question #28 Topic 1

A company hosts a client-side web application for one of its subsidiaries on Amazon S3. The web application can be accessed through Amazon CloudFront from https://www.example.com. After a successful rollout, the company wants to host three more client-side web applications for its remaining subsidiaries on three separate S3 buckets.
To achieve this goal, a developer moves all the common JavaScript files and web fonts to a central S3 bucket that serves the web applications. However, during testing, the developer notices that the browser blocks the JavaScript files and web fonts.
What should the developer do to prevent the browser from blocking the JavaScript files and web fonts?

  • A. Create four access points that allow access to the central S3 bucket. Assign an access point to each web application bucket.
  • B. Create a bucket policy that allows access to the central S3 bucket. Attach the bucket policy to the central S3 bucket
  • C. Create a cross-origin resource sharing (CORS) configuration that allows access to the central S3 bucket. Add the CORS configuration to the central S3 bucket.
  • D. Create a Content-MD5 header that provides a message integrity check for the central S3 bucket. Insert the Content-MD5 header for each web application request.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C. This is a frequent problem. Web applications cannot access resources in other domains by default, with some exceptions. You must configure CORS on the resources to be accessed. https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
upvoted 6 times
...
svrnvtr
Most Recent 7 months, 2 weeks ago
Selected Answer: C
It is C
upvoted 3 times
...
aragon_saa
7 months, 3 weeks ago
C https://www.examtopics.com/discussions/amazon/view/88856-exam-aws-certified-developer-associate-topic-1-question-302/
upvoted 3 times
...
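The CORS fix in answer C is a bucket-level configuration on the central bucket. A minimal sketch (bucket and origin names are hypothetical) of the rules that would let browsers on the application origins fetch the shared JavaScript and fonts:

```python
# Hypothetical origins -- one entry per subsidiary web application.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": [
                "https://www.example.com",
                "https://app2.example.com",
            ],
            "AllowedMethods": ["GET"],  # static assets only need reads
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3600,      # let browsers cache the preflight response
        }
    ]
}
# boto3.client("s3").put_bucket_cors(
#     Bucket="central-assets", CORSConfiguration=cors_configuration)
```

Web fonts in particular are always subject to CORS checks, which is why the browser blocks them until the central bucket returns the right `Access-Control-Allow-Origin` header.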
Question #29 Topic 1

An application is processing clickstream data using Amazon Kinesis. The clickstream data feed into Kinesis experiences periodic spikes. The PutRecords API call occasionally fails and the logs show that the failed call returns the response shown below:

Which techniques will help mitigate this exception? (Choose two.)

  • A. Implement retries with exponential backoff.
  • B. Use a PutRecord API instead of PutRecords.
  • C. Reduce the frequency and/or size of the requests.
  • D. Use Amazon SNS instead of Kinesis.
  • E. Reduce the number of KCL consumers.

Correct Answer: AC 🗳️

Community vote distribution
AC (75%)
BC (25%)

eboehm2
4 months, 3 weeks ago
Selected Answer: AC
100% AC as per AWS : ProvisionedThroughputExceededException The request rate for the stream is too high, or the requested data is too large for the available throughput. Reduce the frequency or size of your requests. For more information, see Streams Limits in the Amazon Kinesis Data Streams Developer Guide, and Error Retries and Exponential Backoff in AWS in the AWS General Reference. https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html
upvoted 4 times
...
Baba_Eni
5 months ago
Selected Answer: AC
AC is the best answer. When there is throttling, it is best practice to implement retries with exponential backoff.
upvoted 1 times
...
ezredame
5 months, 1 week ago
Selected Answer: BC
I think this is a really tricky question. To get this exception, the request rate for the stream is too high, or the requested data is too large for the available throughput: reduce the frequency or size of your requests. So we can "reduce the frequency and/or size of the requests" and also decrease the size by "using a PutRecord API instead of PutRecords". The API already implements retries with exponential backoff, so there is no need for A.
upvoted 3 times
eboehm2
4 months, 3 weeks ago
I thought this at first too, but I was doing some additional reading and using the PutRecord API over PutRecords is wrong as it could actually make the problem worse as producers may make too many rapid requests to write to the stream https://repost.aws/knowledge-center/kinesis-data-stream-throttling
upvoted 2 times
...
Majong
5 months, 1 week ago
Can you please add a link where I can find this information? From what I can read on AWS, you can implement exponential backoff, but it is not enabled by default.
upvoted 1 times
...
...
Untamables
7 months, 2 weeks ago
Selected Answer: AC
A and C https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-data-stream-throttling-errors/
upvoted 4 times
...
aragon_saa
7 months, 3 weeks ago
AC https://www.examtopics.com/discussions/amazon/view/69142-exam-aws-certified-developer-associate-topic-1-question-370/
upvoted 4 times
yashika2005
5 months, 1 week ago
thanks a lotttt!
upvoted 1 times
...
...
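The two chosen mitigations combine naturally: on a throughput exception, retry only the failed subset with exponential backoff and jitter. A minimal sketch, where the `send` callable stands in for a real `PutRecords` call and is assumed to return the records that were throttled:

```python
import random
import time

def put_with_backoff(send, records, max_attempts=5, base_delay=0.05):
    """Retry throttled records with exponential backoff plus full jitter."""
    for attempt in range(max_attempts):
        failed = send(records)           # stand-in: returns the throttled records
        if not failed:
            return
        # Delay doubles each attempt (base * 2^attempt), scaled by random jitter
        # so retrying producers do not all hit the stream at the same instant.
        time.sleep(base_delay * (2 ** attempt) * random.random())
        records = failed                 # retry only the failed subset
    raise RuntimeError("records still throttled after all retries")
```

Retrying only the failed subset also shrinks each subsequent request, which is exactly mitigation C.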
Question #30 Topic 1

A company has an application that uses Amazon Cognito user pools as an identity provider. The company must secure access to user records. The company has set up multi-factor authentication (MFA). The company also wants to send a login activity notification by email every time a user logs in.
What is the MOST operationally efficient solution that meets this requirement?

  • A. Create an AWS Lambda function that uses Amazon Simple Email Service (Amazon SES) to send the email notification. Add an Amazon API Gateway API to invoke the function. Call the API from the client side when login confirmation is received.
  • B. Create an AWS Lambda function that uses Amazon Simple Email Service (Amazon SES) to send the email notification. Add an Amazon Cognito post authentication Lambda trigger for the function.
  • C. Create an AWS Lambda function that uses Amazon Simple Email Service (Amazon SES) to send the email notification. Create an Amazon CloudWatch Logs log subscription filter to invoke the function based on the login status.
  • D. Configure Amazon Cognito to stream all logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to process the streamed logs and to send the email notification based on the login status of each user.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Bibay
Highly Voted 5 months, 3 weeks ago
B. Create an AWS Lambda function that uses Amazon Simple Email Service (Amazon SES) to send the email notification. Add an Amazon Cognito post authentication Lambda trigger for the function. The most operationally efficient solution for sending login activity notifications by email for Amazon Cognito user pools is to use a Lambda trigger that is automatically invoked by Amazon Cognito every time a user logs in. This eliminates the need for client-side calls to an API or log subscription filter. A Lambda function can be used to send email notifications using Amazon SES. Option B satisfies these requirements and is the most operationally efficient solution.
upvoted 6 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-post-authentication.html
upvoted 5 times
...
aragon_saa
Most Recent 7 months, 3 weeks ago
B https://www.examtopics.com/discussions/amazon/view/78944-exam-aws-certified-developer-associate-topic-1-question-9/
upvoted 3 times
...
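A post authentication trigger receives the Cognito event directly, so no client-side call is needed. A minimal sketch of the handler (the SES sender address is hypothetical, and the actual `send_email` call is left commented out):

```python
def lambda_handler(event, context):
    """Cognito post authentication trigger: notify the user of a login by email."""
    email = event["request"]["userAttributes"].get("email")
    if email:
        # boto3.client("ses").send_email(
        #     Source="security@example.com",  # hypothetical verified SES sender
        #     Destination={"ToAddresses": [email]},
        #     Message={"Subject": {"Data": "New sign-in to your account"},
        #              "Body": {"Text": {"Data": "We noticed a new login."}}})
        pass
    return event  # post authentication triggers must return the event to Cognito
```

Cognito invokes this automatically on every successful login, which is what makes B more operationally efficient than wiring an API Gateway call from the client.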
Question #31 Topic 1

A developer has an application that stores data in an Amazon S3 bucket. The application uses an HTTP API to store and retrieve objects. When the PutObject API operation adds objects to the S3 bucket the developer must encrypt these objects at rest by using server-side encryption with Amazon S3 managed keys (SSE-S3).
Which solution will meet this requirement?

  • A. Create an AWS Key Management Service (AWS KMS) key. Assign the KMS key to the S3 bucket.
  • B. Set the x-amz-server-side-encryption header when invoking the PutObject API operation.
  • C. Provide the encryption key in the HTTP header of every request.
  • D. Apply TLS to encrypt the traffic to the S3 bucket.

Correct Answer: B 🗳️

Community vote distribution
B (92%)
8%

svrnvtr
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
upvoted 7 times
...
aanataliya
Highly Voted 2 months, 2 weeks ago
The answer for this question changed starting January 5, 2023. Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html
upvoted 6 times
fordiscussionstwo
1 month ago
What is the correct answer then?
upvoted 1 times
...
...
[Removed]
Most Recent 3 months, 3 weeks ago
Selected Answer: B
Header parameter "s3:x-amz-server-side-encryption": "AES256"
upvoted 3 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: B
C is a way to use customer-provided keys not S3-managed keys.
upvoted 2 times
...
CisconAWSGURU
4 months, 2 weeks ago
Selected Answer: C
C is correct and hear is the reason from AWS docs. Visit AWS Regions and Endpoints in the AWS General Reference or the AWS Region Table to see the regional availability for ACM. Certificates in ACM are regional resources. To use a certificate with Elastic Load Balancing for the same fully qualified domain name (FQDN) or set of FQDNs in more than one AWS region, you must request or import a certificate for each region. For certificates provided by ACM, this means you must revalidate each domain name in the certificate for each region. You cannot copy a certificate between regions. To use an ACM certificate with Amazon CloudFront, you must request or import the certificate in the US East (N. Virginia) region. ACM certificates in this region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution.
upvoted 1 times
...
Bibay
5 months, 3 weeks ago
B. Set the x-amz-server-side-encryption header when invoking the PutObject API operation. When using the PutObject API operation to store objects in an S3 bucket, the x-amz-server-side-encryption header can be set to specify the server-side encryption algorithm used to encrypt the object. Setting this header to "AES256" or "aws:kms" enables server-side encryption with SSE-S3 or SSE-KMS respectively. Option A is incorrect because assigning a KMS key to the S3 bucket will not enable SSE-S3 encryption. Option C is incorrect because providing the encryption key in the HTTP header of every request is not a valid way to enable SSE-S3 encryption. Option D is incorrect because applying TLS encryption to the traffic to the S3 bucket only encrypts the data in transit, but does not encrypt the objects at rest in the bucket.
upvoted 3 times
jipark
3 months ago
I now got to know that assigning a KMS key to an S3 bucket will not enable SSE-S3 encryption.
upvoted 1 times
...
...
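In boto3 the `x-amz-server-side-encryption` header from answer B is set through the `ServerSideEncryption` parameter of `put_object`. A minimal sketch (bucket and key names are hypothetical):

```python
# Hypothetical bucket and key names.
put_params = {
    "Bucket": "my-app-bucket",
    "Key": "records/object-1.json",
    "Body": b'{"id": 1}',
    "ServerSideEncryption": "AES256",  # sent as the x-amz-server-side-encryption header
}
# boto3.client("s3").put_object(**put_params)
```

`"AES256"` selects SSE-S3; `"aws:kms"` would select SSE-KMS instead, which is not what this question asks for.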
Question #32 Topic 1

A developer needs to perform geographic load testing of an API. The developer must deploy resources to multiple AWS Regions to support the load testing of the API.
How can the developer meet these requirements without additional application code?

  • A. Create and deploy an AWS Lambda function in each desired Region. Configure the Lambda function to create a stack from an AWS CloudFormation template in that Region when the function is invoked.
  • B. Create an AWS CloudFormation template that defines the load test resources. Use the AWS CLI create-stack-set command to create a stack set in the desired Regions.
  • C. Create an AWS Systems Manager document that defines the resources. Use the document to create the resources in the desired Regions.
  • D. Create an AWS CloudFormation template that defines the load test resources. Use the AWS CLI deploy command to create a stack from the template in each Region.

Correct Answer: B 🗳️

Community vote distribution
B (90%)
10%

hsinchang
1 month, 4 weeks ago
Creating a stack set in the desired Regions (B) is better than creating a stack in each Region individually (D).
upvoted 2 times
...
rlnd2000
2 months, 3 weeks ago
Selected Answer: C
If using Edge-Optimized endpoint, then the certificate must be in us-east-1 If using Regional endpoint, the certificate must be in the API Gateway region
upvoted 1 times
...
Bibay
5 months, 3 weeks ago
Selected Answer: B
B. Create an AWS CloudFormation template that defines the load test resources. Use the AWS CLI create-stack-set command to create a stack set in the desired Regions. AWS CloudFormation StackSets allow developers to deploy CloudFormation stacks across multiple AWS accounts and regions with a single CloudFormation template. By using the AWS CLI create-stack-set command, the developer can deploy the same CloudFormation stack to multiple regions without additional application code, thereby meeting the requirement for geographic load testing of an API.
upvoted 3 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html https://awscli.amazonaws.com/v2/documentation/api/2.1.30/reference/cloudformation/create-stack-set.html
upvoted 3 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: B
B https://aws.amazon.com/ru/about-aws/whats-new/2021/04/deploy-cloudformation-stacks-concurrently-across-multiple-aws-regions-using-aws-cloudformation-stacksets/
upvoted 3 times
...
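The CLI flow in answer B is two calls: `create-stack-set` with the template, then `create-stack-instances` naming the Regions. A minimal boto3 equivalent of the same parameters (stack set name, account ID, and Regions are hypothetical; the template body is elided):

```python
# Hypothetical names -- substitute your own stack set name, account, and Regions.
stack_set_params = {
    "StackSetName": "load-test",
    "TemplateBody": "<CloudFormation template defining the load test resources>",
}
instance_params = {
    "StackSetName": "load-test",
    "Accounts": ["111122223333"],
    "Regions": ["us-east-1", "eu-west-1", "ap-southeast-2"],
}
# cfn = boto3.client("cloudformation")
# cfn.create_stack_set(**stack_set_params)
# cfn.create_stack_instances(**instance_params)
```

One template, one stack set, and CloudFormation deploys a stack instance into every listed Region — no per-Region deploy loop and no application code.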
Question #33 Topic 1

A developer is creating an application that includes an Amazon API Gateway REST API in the us-east-2 Region. The developer wants to use Amazon CloudFront and a custom domain name for the API. The developer has acquired an SSL/TLS certificate for the domain from a third-party provider.
How should the developer configure the custom domain for the application?

  • A. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API. Create a DNS A record for the custom domain.
  • B. Import the SSL/TLS certificate into CloudFront. Create a DNS CNAME record for the custom domain.
  • C. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API. Create a DNS CNAME record for the custom domain.
  • D. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the us-east-1 Region. Create a DNS CNAME record for the custom domain.

Correct Answer: B 🗳️

Community vote distribution
D (79%)
C (21%)

brandon87
Highly Voted 7 months, 1 week ago
Selected Answer: D
To use a certificate in AWS Certificate Manager (ACM) to require HTTPS between viewers and CloudFront, make sure you request (or import) the certificate in the US East (N. Virginia) Region (us-east-1). https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
upvoted 16 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: D
The correct answer is D. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
upvoted 7 times
...
Jonalb
Most Recent 1 week, 5 days ago
D. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the us-east-1 Region. Create a DNS CNAME record for the custom domain.
upvoted 1 times
...
fossil123
2 months, 1 week ago
Selected Answer: D
AWS Region for AWS Certificate Manager To use a certificate in AWS Certificate Manager (ACM) to require HTTPS between viewers and CloudFront, make sure you request (or import) the certificate in the US East (N. Virginia) Region (us-east-1).
upvoted 1 times
...
ancomedian
3 months, 3 weeks ago
Selected Answer: D
I have checked in various places; the answer is D. Reason: for CloudFront, ACM can only use a certificate imported in us-east-1, and we then associate the imported certificate with the API in us-east-2. The confusion comes from the difference between import and associate. Crux: we import in us-east-1 but use it in us-east-2.
upvoted 3 times
...
acordovam
3 months, 3 weeks ago
Selected Answer: D
D. If you need to use CloudFront, then you must import it into us-east-1. https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
upvoted 1 times
...
Pupina
4 months ago
Selected Answer: D
A is not right because for CloudFront you create a CNAME, not a DNS A record. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html C is not right because CloudFront cannot use ACM certificates imported in us-east-2. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html B is not right. The certificate is from an external CA but can be uploaded to ACM, or you must request a public certificate from AWS Certificate Manager https://repost.aws/knowledge-center/install-ssl-cloudfront but you cannot import the certificate into CloudFront.
upvoted 1 times
...
rlnd2000
4 months, 3 weeks ago
Selected Answer: C
C. The first statement of the question: "A developer is creating an application that includes an Amazon API Gateway REST API in the us-east-2 Region." It is a Regional API; when using a Regional endpoint, the SSL/TLS certificate for the custom domain must be imported into AWS Certificate Manager (ACM) in the same Region as the API. Only if we use an Edge-Optimized endpoint must the certificate be in us-east-1.
upvoted 2 times
...
peterpain
5 months, 2 weeks ago
Selected Answer: D
The ACM certificate has to be imported in us-east-1.
upvoted 2 times
...
Bibay
5 months, 3 weeks ago
Selected Answer: C
To use Amazon CloudFront and a custom domain name for an Amazon API Gateway REST API, the developer should import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API, and create a DNS CNAME record for the custom domain. This is because AWS Certificate Manager can only issue SSL/TLS certificates in the same Region as the API, and a DNS CNAME record maps the custom domain to the CloudFront distribution. Option A is incorrect because a DNS A record is not sufficient to map the custom domain to the CloudFront distribution. Option B is incorrect because AWS Certificate Manager must issue the SSL/TLS certificate in the same Region as the API. Option D is incorrect because the SSL/TLS certificate must be issued in the same Region as the API, and a DNS CNAME record is required to map the custom domain to the CloudFront distribution.
upvoted 4 times
...
KhyatiChhajed
6 months ago
Selected Answer: C
C. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same Region as the API. Create a DNS CNAME record for the custom domain. Explanation: Amazon CloudFront can use SSL/TLS certificates stored in AWS Certificate Manager (ACM) to provide secure HTTPS connections for custom domain names. In this scenario, the developer should import the SSL/TLS certificate acquired from a third-party provider into ACM in the same Region as the API (us-east-2 in this case). This allows the certificate to be used by CloudFront.
upvoted 1 times
...
hanJR
6 months, 2 weeks ago
It's D. It is trying to integrate with CloudFront, therefore it must upload certificates in us-east-1. If it was a regional API, then certificates must be uploaded in the same region of the API Gateway.
upvoted 1 times
...
March2023
7 months, 2 weeks ago
Selected Answer: C
I was thinking this answer would be C
upvoted 1 times
...
Question #34 Topic 1

A developer is creating a template that uses AWS CloudFormation to deploy an application. The application is serverless and uses Amazon API Gateway, Amazon DynamoDB, and AWS Lambda.
Which AWS service or tool should the developer use to define serverless resources in YAML?

  • A. CloudFormation serverless intrinsic functions
  • B. AWS Elastic Beanstalk
  • C. AWS Serverless Application Model (AWS SAM)
  • D. AWS Cloud Development Kit (AWS CDK)

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Bibay
Highly Voted 5 months, 3 weeks ago
The recommended AWS service for defining serverless resources in YAML is the AWS Serverless Application Model (AWS SAM). AWS SAM is an open-source framework that extends AWS CloudFormation to provide a simplified way to define the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application. You can define your serverless resources in a YAML template and then use the AWS SAM CLI to package and deploy your application. AWS CloudFormation serverless intrinsic functions can also be used to define serverless resources in YAML, but they have some limitations compared to AWS SAM. AWS Elastic Beanstalk is a platform as a service (PaaS) that is not serverless specific, while the AWS Cloud Development Kit (AWS CDK) is an alternative to YAML-based templates that uses familiar programming languages like TypeScript, Python, and Java to define AWS infrastructure.
upvoted 10 times
jipark
3 months ago
your explanation helps me a lot !
upvoted 2 times
...
...
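To make the SAM answer concrete, here is a minimal template sketch (resource names, handler paths, and the runtime are hypothetical):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # marks this as a SAM template

Resources:
  # Lambda function with an implicit API Gateway endpoint
  ProcessFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Events:
        Api:
          Type: Api
          Properties:
            Path: /process
            Method: post

  # DynamoDB table via SAM's simplified resource type
  ItemsTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: id
        Type: String
```

Running `sam build` and `sam deploy` would expand this into the equivalent full CloudFormation resources.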
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://aws.amazon.com/serverless/sam/
upvoted 5 times
...
Jonalb
Most Recent 1 week, 5 days ago
AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that makes it easier to define serverless applications. AWS SAM provides simpler templates for configuring serverless resources such as AWS Lambda, Amazon API Gateway, and Amazon DynamoDB. The templates can be defined in YAML or JSON. C
upvoted 1 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: C
C is the answer
upvoted 3 times
...
Question #35 Topic 1

A developer wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket.
Which set of steps would be necessary to achieve this?

  • A. Create an event with Amazon EventBridge that will monitor the S3 bucket and then insert the records into DynamoDB.
  • B. Configure an S3 event to invoke an AWS Lambda function that inserts records into DynamoDB.
  • C. Create an AWS Lambda function that will poll the S3 bucket and then insert the records into DynamoDB.
  • D. Create a cron job that will run at a scheduled time and insert the records into DynamoDB.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Bibay
Highly Voted 5 months, 3 weeks ago
The correct answer is B. To insert a record into DynamoDB as soon as a new file is added to an S3 bucket, you can configure an S3 event notification to invoke an AWS Lambda function that inserts the records into DynamoDB. When a new file is added to the S3 bucket, the S3 event notification will trigger the Lambda function, which will insert the record into the DynamoDB table. Option A is incorrect because Amazon EventBridge is not necessary to achieve this. S3 event notifications can directly invoke a Lambda function to insert records into DynamoDB. Option C is incorrect because polling the S3 bucket periodically to check for new files is inefficient and not necessary with S3 event notifications. Option D is incorrect because running a cron job at a scheduled time is not real-time and would not insert the record into DynamoDB as soon as a new file is added to the S3 bucket.
upvoted 8 times
...
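As a sketch of option B's flow, a Lambda handler might look like this. The event parsing follows the S3 notification format; the item shape and names are hypothetical, and the actual DynamoDB write (which would use boto3) is left as a comment so the snippet stays self-contained:

```python
def extract_puts(event):
    """Map S3 ObjectCreated notification records to DynamoDB-style items.

    The item shape (pk, bucket, size) is hypothetical; a real handler
    would write each item with boto3, e.g. table.put_item(Item=item).
    """
    items = []
    for record in event.get("Records", []):
        if record.get("eventName", "").startswith("ObjectCreated"):
            s3 = record["s3"]
            items.append({
                "pk": s3["object"]["key"],
                "bucket": s3["bucket"]["name"],
                "size": s3["object"].get("size", 0),
            })
    return items


def handler(event, context):
    # In a real Lambda function this would put the items into DynamoDB.
    return extract_puts(event)


# Abridged S3 event notification payload as delivered to Lambda
sample_event = {"Records": [{
    "eventName": "ObjectCreated:Put",
    "s3": {"bucket": {"name": "photo-bucket"},
           "object": {"key": "uploads/cat.jpg", "size": 1024}},
}]}

print(handler(sample_event, None))
```

The S3 event notification invokes this handler once per new object, so no polling or scheduling is needed.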
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
upvoted 6 times
...
svrnvtr
Most Recent 7 months, 2 weeks ago
It is B
upvoted 4 times
...
Question #36 Topic 1

A development team maintains a web application by using a single AWS CloudFormation template. The template defines web servers and an Amazon RDS database. The team uses the CloudFormation template to deploy the CloudFormation stack to different environments.
During a recent application deployment, a developer caused the primary development database to be dropped and recreated. The result of this incident was a loss of data. The team needs to avoid accidental database deletion in the future.
Which solutions will meet these requirements? (Choose two.)

  • A. Add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource.
  • B. Update the CloudFormation stack policy to prevent updates to the database.
  • C. Modify the database to use a Multi-AZ deployment.
  • D. Create a CloudFormation stack set for the web application and database deployments.
  • E. Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.

Correct Answer: AD 🗳️

Community vote distribution
AB (100%)

Mtho96
Highly Voted 4 months ago
A. Add a CloudFormation Deletion Policy attribute with the Retain value to the database resource: By adding a DeletionPolicy attribute with the Retain value to the database resource in the CloudFormation template, the database will not be deleted even if the CloudFormation stack is deleted. This helps prevent accidental database loss during stack deletion. B. Update the CloudFormation stack policy to prevent updates to the database: By updating the CloudFormation stack policy, the development team can restrict updates to the database resource. This prevents accidental modifications or recreations of the database during stack updates. The stack policy can define specific actions that are allowed or denied, providing an additional layer of protection against unintentional database changes.
upvoted 6 times
...
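The two mechanisms can be sketched as follows (logical IDs and properties are hypothetical). Option A is a template attribute on the resource:

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    # Option A: keep the database (and its data) even if the stack is deleted
    DeletionPolicy: Retain
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
```

Option B is a separate stack policy document, applied with `aws cloudformation set-stack-policy`:

```json
{
  "Statement": [
    {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
    {"Effect": "Deny",
     "Action": ["Update:Replace", "Update:Delete"],
     "Principal": "*",
     "Resource": "LogicalResourceId/AppDatabase"}
  ]
}
```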
svrnvtr
Highly Voted 7 months, 2 weeks ago
Selected Answer: AB
AB https://aws.amazon.com/ru/premiumsupport/knowledge-center/cloudformation-accidental-updates/
upvoted 6 times
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: AB
https://aws.amazon.com/ru/premiumsupport/knowledge-center/cloudformation-accidental-updates/
upvoted 1 times
...
magicjims
1 month, 4 weeks ago
Selected Answer: AB
This came up in the exam today, I chose A&B
upvoted 2 times
...
panoptica
1 month, 4 weeks ago
D & A for me
upvoted 1 times
...
nguyenta
3 months, 3 weeks ago
Selected Answer: AB
A and B
upvoted 2 times
...
marvel21
4 months, 4 weeks ago
A & B Correct Answer
upvoted 2 times
...
s50600822
5 months ago
D because grandma said?
upvoted 2 times
...
Japanjot
6 months, 1 week ago
A B CORRECT
upvoted 1 times
...
ihebchorfi
6 months, 1 week ago
Selected Answer: AB
D is wrong because it still doesn't protect against accidental deletion of the DB.
upvoted 1 times
ihebchorfi
6 months, 1 week ago
After more thinking, combining A & D is the correct answer, so i would go with AD
upvoted 2 times
...
...
Untamables
7 months, 2 weeks ago
Selected Answer: AB
A and B https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
upvoted 4 times
...
March2023
7 months, 2 weeks ago
Selected Answer: AB
I agree it is AB
upvoted 3 times
...
Question #37 Topic 1

A company has an Amazon S3 bucket that contains sensitive data. The data must be encrypted in transit and at rest. The company encrypts the data in the S3 bucket by using an AWS Key Management Service (AWS KMS) key. A developer needs to grant several other AWS accounts the permission to use the S3 GetObject operation to retrieve the data from the S3 bucket.
How can the developer enforce that all requests to retrieve the data provide encryption in transit?

  • A. Define a resource-based policy on the S3 bucket to deny access when a request meets the condition “aws:SecureTransport”: “false”.
  • B. Define a resource-based policy on the S3 bucket to allow access when a request meets the condition “aws:SecureTransport”: “false”.
  • C. Define a role-based policy on the other accounts' roles to deny access when a request meets the condition of “aws:SecureTransport”: “false”.
  • D. Define a resource-based policy on the KMS key to deny access when a request meets the condition of “aws:SecureTransport”: “false”.

Correct Answer: A 🗳️

Community vote distribution
A (93%)
7%

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A https://repost.aws/knowledge-center/s3-bucket-policy-for-config-rule
upvoted 7 times
...
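For reference, a deny statement of the kind option A describes might look like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```

Because an explicit Deny overrides any Allow, cross-account principals that have been granted s3:GetObject still cannot read the objects over plain HTTP.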
Watascript
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A is correct.
upvoted 5 times
...
CrescentShared
Most Recent 2 weeks, 6 days ago
Selected Answer: D
I hesitated between A and D. The question is not clear on whether we want to block all the information or only the sensitive part.
upvoted 1 times
...
winzzhhzzhh
2 months ago
I know A is correct but D seems correct as well, since users will need access to the KMS key to decrypt the data in the bucket.
upvoted 1 times
...
Malkia
6 months ago
Selected Answer: A
A is correct.
upvoted 1 times
...
Question #38 Topic 1

An application that is hosted on an Amazon EC2 instance needs access to files that are stored in an Amazon S3 bucket. The application lists the objects that are stored in the S3 bucket and displays a table to the user. During testing, a developer discovers that the application does not show any objects in the list.
What is the MOST secure way to resolve this issue?

  • A. Update the IAM instance profile that is attached to the EC2 instance to include the S3:* permission for the S3 bucket.
  • B. Update the IAM instance profile that is attached to the EC2 instance to include the S3:ListBucket permission for the S3 bucket.
  • C. Update the developer's user permissions to include the S3:ListBucket permission for the S3 bucket.
  • D. Update the S3 bucket policy by including the S3:ListBucket permission and by setting the Principal element to specify the account number of the EC2 instance.

Correct Answer: B 🗳️

Community vote distribution
B (75%)
A (25%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
The correct answer is B. https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket Option A also works, but it does not comply with the AWS security best practice of least-privilege permissions.
upvoted 7 times
yeacuz
5 months, 3 weeks ago
Option B only allows you to list the bucket - you will still not see the objects if only s3:ListBucket permission is configured.
upvoted 2 times
...
...
yeacuz
Highly Voted 6 months ago
Selected Answer: A
Option A allows you to list buckets AND objects. Option B only allows you to list the bucket - you will still not see the objects if only s3:ListBucket permission is configured.
upvoted 5 times
Jeremy11
3 months, 1 week ago
Not true: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html To use this action in an AWS Identity and Access Management (IAM) policy, you must have permission to perform the s3:ListBucket action.
upvoted 2 times
...
...
ninomfr64
Most Recent 2 months, 2 weeks ago
Selected Answer: B
It is B, but I had to dig into docs to learn that to use ListObjectsV2, in an AWS Identity and Access Management (IAM) policy, you must have permission to perform the s3:ListBucket action. https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
upvoted 1 times
...
ashish_roy
2 months, 3 weeks ago
Can someone email me a pdf of the questions (DVA-C02 & DVA-C01) at qwerty19roy@gmail.com Thanks in advance!
upvoted 2 times
...
jipark
3 months ago
are there anyone who can explain D ? - S3 bucket policy
upvoted 3 times
nmc12
1 month, 1 week ago
Option D is not the most secure choice, as using bucket policies and specifying account numbers can lead to overly complex and less secure configurations, especially if not managed carefully. To implement option B, attach a policy like the following to the instance role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
upvoted 1 times
...
...
s50600822
5 months ago
A violated least privilege principle so B
upvoted 3 times
...
yashika2005
5 months, 1 week ago
Selected Answer: B
the s3:ListBucket permission allows the user to use the Amazon S3 GET Bucket (List Objects) operation. Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html
upvoted 3 times
...
svrnvtr
7 months, 2 weeks ago
Selected Answer: B
It is B
upvoted 4 times
...
Question #39 Topic 1

A company is planning to securely manage one-time fixed license keys in AWS. The company's development team needs to access the license keys in automation scripts that run in Amazon EC2 instances and in AWS CloudFormation stacks.
Which solution will meet these requirements MOST cost-effectively?

  • A. Amazon S3 with encrypted files prefixed with “config”
  • B. AWS Secrets Manager secrets with a tag that is named SecretString
  • C. AWS Systems Manager Parameter Store SecureString parameters
  • D. CloudFormation NoEcho parameters

Correct Answer: C 🗳️

Community vote distribution
C (100%)

hanJR
Highly Voted 6 months, 2 weeks ago
I chose C because AWS Secrets Manager does automatic key rotation (the question says that the key is one-time fixed).
upvoted 10 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
upvoted 7 times
...
alohayo
Most Recent 1 month, 3 weeks ago
Both B and C are feasible solutions. Just consider the "MOST cost-effectively" here. AWS Systems Manager Parameter Store comes with no additional cost (Standard tier). However, AWS Secrets Manager costs $0.40 per secret per month, and data retrieval costs $0.05 per 10,000 API calls. C is much cheaper.
upvoted 5 times
...
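For illustration, an automation script would typically read such a parameter with the AWS CLI. The parameter name below is a placeholder; --with-decryption asks Parameter Store to decrypt the SecureString with KMS before returning it:

```
aws ssm get-parameter \
    --name /licenses/product-key \
    --with-decryption \
    --query Parameter.Value \
    --output text
```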
s50600822
5 months ago
Parameter Store is probably free for this use case: https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-advanced-parameters.html, even though the Secrets Manager cost may also amount to almost nothing (given the scale of the use case and a caching client). Again, the only notable difference is the aforementioned irrelevant tag.
upvoted 2 times
...
Question #40 Topic 1

A company has deployed infrastructure on AWS. A development team wants to create an AWS Lambda function that will retrieve data from an Amazon Aurora database. The Amazon Aurora database is in a private subnet in the company's VPC. The VPC is named VPC1. The data is relational in nature. The Lambda function needs to access the data securely.
Which solution will meet these requirements?

  • A. Create the Lambda function. Configure VPC1 access for the function. Attach a security group named SG1 to both the Lambda function and the database. Configure the security group inbound and outbound rules to allow TCP traffic on Port 3306.
  • B. Create and launch a Lambda function in a new public subnet that is in a new VPC named VPC2. Create a peering connection between VPC1 and VPC2.
  • C. Create the Lambda function. Configure VPC1 access for the function. Assign a security group named SG1 to the Lambda function. Assign a second security group named SG2 to the database. Add an inbound rule to SG1 to allow TCP traffic from Port 3306.
  • D. Export the data from the Aurora database to Amazon S3. Create and launch a Lambda function in VPC1. Configure the Lambda function to query the data from Amazon S3.

Correct Answer: B 🗳️

Community vote distribution
A (61%)
C (30%)
9%

shahs10
Highly Voted 7 months, 1 week ago
Selected Answer: A
The correct answer is A. For B, creating a new VPC for Lambda does not seem a suitable solution. For C, assigning different security groups to both will not work as described. Option D is not suitable for relational data and involves S3 in the solution.
upvoted 6 times
...
Watascript
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A? https://repost.aws/en/knowledge-center/connect-lambda-to-an-rds-instance
upvoted 6 times
...
quanghao
Most Recent 1 week, 2 days ago
Selected Answer: B
For a Lambda function and an RDS instance in different VPCs: first, use VPC peering to connect the two VPCs; then use the networking configuration to connect the Lambda function in one VPC to the RDS instance in the other.
upvoted 2 times
...
hcsaba1982
2 weeks, 4 days ago
Selected Answer: B
This is the only option where Lambda can reach the database at all, which seems to me a prerequisite given that the VPC was mentioned. By default, Lambda launches outside your VPC (in an AWS-owned VPC), so it cannot access resources in the private subnet.
upvoted 1 times
...
dexdinh91
3 weeks ago
Selected Answer: B
B is correct?
upvoted 1 times
...
quanbui
3 weeks, 5 days ago
Selected Answer: C
C, need 2 SG
upvoted 1 times
...
sofiatian
1 month, 2 weeks ago
Selected Answer: C
Need two security groups. One is for Lambda function. The other one is for DB
upvoted 1 times
...
hsinchang
1 month, 4 weeks ago
A. right. B. public, insecure. C. excessive connections. D. additional cost and complexity.
upvoted 2 times
...
love777
2 months, 1 week ago
Selected Answer: A
VPC Configuration: Ensure that your Lambda function is configured to run within the same VPC where your Amazon Aurora database resides (VPC1 in this case). Configure the Lambda function to use the appropriate subnets within VPC1, which are associated with the private subnet where your Amazon Aurora database is located. Security Groups: Attach a security group (SG1) to both the Lambda function and the Amazon Aurora database. Configure the security group inbound rules for SG1 to allow incoming TCP traffic on Port 3306, which is the default port for MySQL (used by Aurora). This will allow communication between the Lambda function and the database. Outbound rules should be allowed by default, so you don't need to make any changes there.
upvoted 1 times
...
ninomfr64
2 months, 2 weeks ago
Selected Answer: A
There isn't the ideal solution to the use case among the options. B) no need to create a new VPC and also you need to add route tables and configure SGs to make it works C) this could work if the rule on SG1 was outbound instead of inbound (the connection is initiated from Lambda to Aurora) D) export data to S3 is overkill and if you do that you no longer need to deploy the lambda in the VPC A) works, as SG1 is attached to both Lambda and Aurora we need outbound rule to 3306 (Lambda initiate communication to Aurora) and also inbound rule from 3306 (to allow Aurora accept connection from Lambda). I don't like to have the same SG1 for both the Lambda and the Aurora
upvoted 4 times
...
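For reference, the commonly recommended variant discussed above (separate security groups, with the database group admitting traffic only from the function's group) can be sketched in CloudFormation. All names are hypothetical, and VPC1 is assumed to be a template parameter holding the VPC ID:

```yaml
Resources:
  LambdaSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Lambda function ENIs
      VpcId: !Ref VPC1

  DatabaseSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Aurora MySQL cluster
      VpcId: !Ref VPC1
      SecurityGroupIngress:
        # Allow MySQL/Aurora traffic only from the Lambda security group
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref LambdaSG
```

Note this differs slightly from option C's wording: the ingress rule belongs on the database's group, referencing the Lambda group as the source.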
AWSdeveloper08
3 months, 2 weeks ago
Selected Answer: C
https://www.youtube.com/watch?v=UgWjbSixRg4&ab_channel=DevProblems
upvoted 2 times
...
ancomedian
3 months, 3 weeks ago
Selected Answer: C
The correct answer is C https://www.youtube.com/watch?v=UgWjbSixRg4
upvoted 3 times
...
awsazedevsh
4 months, 1 week ago
It seems it is A, but as far as I know we don't need to create outbound rules for return traffic. So why is it A?
upvoted 1 times
awsazedevsh
4 months ago
Never mind. We need it to let Lambda make outbound requests.
upvoted 2 times
...
...
umer1998
4 months, 2 weeks ago
The correct answer is C https://www.youtube.com/watch?v=UgWjbSixRg4
upvoted 1 times
umer1998
4 months, 2 weeks ago
For B (There is no need to create another VPC, since we can simply add a lambda to a VPC with private subnets) For A (Security Group (SG) is stateless. By using NACL we can do outbound and inbound rules modification + SG is used to give access, if you keep both Lambda and DB in same same SG, if you try to give access of lambda to another resource, that another resource will automatically gets the RDS access - which is out of question)
upvoted 2 times
...
...
rlnd2000
4 months, 3 weeks ago
Selected Answer: C
C is correct, A is a wrong choice, how to config outbound rules in SG? :)
upvoted 1 times
...
kavi00203
4 months, 3 weeks ago
I think B , please verify this guys, https://repost.aws/en/knowledge-center/connect-lambda-to-an-rds-instance#:~:text=Lambda%27s%20subnets%27%20CIDRs.-,A%20Lambda%20function%20and%20RDS%20instance%20in%20different%20VPCs,function%20in%20one%20VPC%20to%20the%20RDS%20instance%20in%20the%20other,-%3A
upvoted 2 times
...
Question #41 Topic 1

A developer is building a web application that uses Amazon API Gateway to expose an AWS Lambda function to process requests from clients. During testing, the developer notices that the API Gateway times out even though the Lambda function finishes under the set time limit.
Which of the following API Gateway metrics in Amazon CloudWatch can help the developer troubleshoot the issue? (Choose two.)

  • A. CacheHitCount
  • B. IntegrationLatency
  • C. CacheMissCount
  • D. Latency
  • E. Count

Correct Answer: BD 🗳️

Community vote distribution
BD (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: BD
B and D The issue is caused by timeout. So the developer needs to know the latency information. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html https://repost.aws/knowledge-center/api-gateway-rest-api-504-errors
upvoted 8 times
...
Watascript
Highly Voted 7 months, 2 weeks ago
Selected Answer: BD
https://docs.aws.amazon.com/apigateway/latest/developerguide/monitoring-cloudwatch.html
upvoted 5 times
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: BD
The best options are therefore B. IntegrationLatency and D. Latency. Both metrics will provide insight into where the latency or delay may be occurring, helping the developer troubleshoot the issue.
upvoted 1 times
...
Question #42 Topic 1

A development team wants to build a continuous integration/continuous delivery (CI/CD) pipeline. The team is using AWS CodePipeline to automate the code build and deployment. The team wants to store the program code to prepare for the CI/CD pipeline.
Which AWS service should the team use to store the program code?

  • A. AWS CodeDeploy
  • B. AWS CodeArtifact
  • C. AWS CodeCommit
  • D. Amazon CodeGuru

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://aws.amazon.com/codecommit/
upvoted 5 times
...
Lucian2407
Most Recent 2 months, 2 weeks ago
Selected Answer: C
Simple answer: CodeCommit
upvoted 2 times
...
jgopireddy
7 months, 1 week ago
Selected Answer: C
C is the right answer
upvoted 4 times
...
Question #43 Topic 1

A developer is designing an AWS Lambda function that creates temporary files that are less than 10 MB during invocation. The temporary files will be accessed and modified multiple times during invocation. The developer has no need to save or retrieve these files in the future.
Where should the temporary files be stored?

  • A. the /tmp directory
  • B. Amazon Elastic File System (Amazon EFS)
  • C. Amazon Elastic Block Store (Amazon EBS)
  • D. Amazon S3

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A A Lambda function has access to local storage in the /tmp directory. Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. https://docs.aws.amazon.com/lambda/latest/dg/foundation-progmodel.html
upvoted 12 times
...
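A minimal sketch of the /tmp usage in pure Python. The file name is hypothetical; tempfile.gettempdir() resolves to /tmp inside the Lambda execution environment, so the same code runs locally and in Lambda:

```python
import os
import tempfile

# tempfile.gettempdir() resolves to /tmp inside the Lambda execution
# environment (512 MB by default, configurable up to 10,240 MB)
scratch = os.path.join(tempfile.gettempdir(), "work.dat")

# Create the temporary file ...
with open(scratch, "wb") as f:
    f.write(b"intermediate result")

# ... and read/modify it as often as needed during the invocation
with open(scratch, "rb") as f:
    data = f.read()

os.remove(scratch)  # tidy up: /tmp can persist across warm invocations
print(len(data))  # 19
```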
Mtho96
Most Recent 4 months ago
The correct answer is A The /tmp directory is the recommended location for storing temporary files within an AWS Lambda function. The /tmp directory provides a writable space with a local storage capacity of 512 MB. It is specifically designed for temporary storage within the Lambda execution environment.
upvoted 2 times
...
Question #44 Topic 1

A developer is designing a serverless application with two AWS Lambda functions to process photos. One Lambda function stores objects in an Amazon S3 bucket and stores the associated metadata in an Amazon DynamoDB table. The other Lambda function fetches the objects from the S3 bucket by using the metadata from the DynamoDB table. Both Lambda functions use the same Python library to perform complex computations and are approaching the quota for the maximum size of zipped deployment packages.
What should the developer do to reduce the size of the Lambda deployment packages with the LEAST operational overhead?

  • A. Package each Python library in its own .zip file archive. Deploy each Lambda function with its own copy of the library.
  • B. Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.
  • C. Combine the two Lambda functions into one Lambda function. Deploy the Lambda function as a single .zip file archive.
  • D. Download the Python library to an S3 bucket. Program the Lambda functions to reference the object URLs.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html
upvoted 7 times
...
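In SAM terms, option B amounts to a LayerVersion shared by both functions. Names, paths, and the runtime below are hypothetical, and only one of the two functions is shown:

```yaml
Resources:
  SharedMathLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: layer/            # expects layer/python/<shared library>
      CompatibleRuntimes:
        - python3.12

  FetchFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: fetch.handler        # package no longer bundles the library
      Runtime: python3.12
      Layers:
        - !Ref SharedMathLayer
```

Each function's deployment package then contains only its own code, while the heavy library lives once in the layer.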
Ponyi
Most Recent 2 days, 20 hours ago
Whenever you see "to make deployment package smaller" -----> Layers
upvoted 1 times
...
Mtho96
4 months ago
B creating a Lambda layer with the required Python library and using it in both Lambda functions, is the most suitable solution for reducing the size of the deployment packages with minimal operational overhead. https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html
upvoted 3 times
...
Baba_Eni
5 months ago
Selected Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html
upvoted 3 times
...
Question #45 Topic 1

A developer is writing an AWS Lambda function. The developer wants to log key events that occur while the Lambda function runs. The developer wants to include a unique identifier to associate the events with a specific function invocation. The developer adds the following code to the Lambda function:

Which solution will meet this requirement?

  • A. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to standard output.
  • B. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to a file.
  • C. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to standard output.
  • D. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file.

Correct Answer: D 🗳️

Community vote distribution
A (90%)
10%

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html https://docs.aws.amazon.com/lambda/latest/dg/nodejs-logging.html There is no explicit information for the runtime, the code is written in Node.js.
upvoted 7 times
Pupina
3 months, 4 weeks ago
• https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/lambda-logging-metrics.html • Lambda automatically streams standard output and standard error messages from a Lambda function to CloudWatch Logs, without requiring logging drivers.
upvoted 2 times
...
...
ninomfr64
Highly Voted 2 months, 2 weeks ago
Selected Answer: A
Both A and D could work here, as both rely on the context object to get access to the request ID: https://docs.aws.amazon.com/us_en/lambda/latest/dg/python-context.html While A uses stdout to send logs to CloudWatch Logs, D writes to a file. D is less specific (where is the file stored? A single file for each execution?) and looks more complex (manage file(s), manage concurrent access to the file ...), thus I'll go for A.
upvoted 5 times
...
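A sketch of option A in Python (in the Node.js runtime the equivalent field is context.awsRequestId). The FakeContext class and the request ID value are hypothetical, present only so the snippet runs outside Lambda:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()


def handler(event, context):
    # The unique invocation identifier lives on the context object;
    # anything written to stdout/stderr ends up in CloudWatch Logs.
    logger.info("start request_id=%s", context.aws_request_id)
    # ... log key events here, tagged with the same request ID ...
    return {"request_id": context.aws_request_id}


# Minimal stand-in for the Lambda context object, for local testing
class FakeContext:
    aws_request_id = "8f5de225-0000-4e6b-a4f1-example"


result = handler({}, FakeContext())
print(result["request_id"])
```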
hsinchang
Most Recent 1 month, 4 weeks ago
The invocation's request ID is in the context object, and logging to standard output, which goes to CloudWatch (more durable, more scalable, etc.), is generally better than using temporary files.
upvoted 1 times
...
Pupina
3 months, 4 weeks ago
Selected Answer A: Handler function https://docs.aws.amazon.com/lambda/latest/dg/nodejs-handler.html Context object awsRequestId – The identifier of the invocation request. https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html
upvoted 1 times
...
rlnd2000
4 months, 2 weeks ago
Selected Answer: A
In my opinion both options A and D can fulfill the requirement. Since there is no requirement for any specific logging and monitoring tool, I will go with the defaults (A) because simple is better than complex :)
upvoted 1 times
...
Prem28
5 months, 3 weeks ago
Selected Answer: A
The application can write logs to standard output or to a file. Standard output is the default destination for logs. Logs that are written to standard output are sent to Amazon CloudWatch Logs. Logs that are written to a file are stored on the Lambda function's execution environment.
upvoted 3 times
...
Nagendhar
5 months, 4 weeks ago
Ans: D The code snippet provided in the question is obtaining the request identifier from the context.awsRequestId property, which is available in the context object provided to the Lambda function handler. Therefore, the correct option is: D. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file. This option meets the requirement of logging key events and including a unique identifier to associate the events with a specific function invocation.
upvoted 1 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: D
Why not D ? Writing logs to a file seems more appropriate than stdout
upvoted 3 times
...
Watascript
7 months, 2 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/us_en/lambda/latest/dg/python-context.html https://docs.aws.amazon.com/us_en/lambda/latest/dg/python-logging.html
upvoted 4 times
...
Dun6
7 months, 2 weeks ago
Selected Answer: A
A it is
upvoted 3 times
...
March2023
7 months, 2 weeks ago
Selected Answer: A
I think the answer is A
upvoted 3 times
...
Question #46 Topic 1

A developer is working on a serverless application that needs to process any changes to an Amazon DynamoDB table with an AWS Lambda function.
How should the developer configure the Lambda function to detect changes to the DynamoDB table?

  • A. Create an Amazon Kinesis data stream, and attach it to the DynamoDB table. Create a trigger to connect the data stream to the Lambda function.
  • B. Create an Amazon EventBridge rule to invoke the Lambda function on a regular schedule. Connect to the DynamoDB table from the Lambda function to detect changes.
  • C. Enable DynamoDB Streams on the table. Create a trigger to connect the DynamoDB stream to the Lambda function.
  • D. Create an Amazon Kinesis Data Firehose delivery stream, and attach it to the DynamoDB table. Configure the delivery stream destination as the Lambda function.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
upvoted 8 times
...
nmc12
Most Recent 1 month ago
Selected Answer: C
C Enabling DynamoDB Streams on the table allows you to capture and process changes (inserts, updates, deletes) to the table in real-time. You can then create a Lambda trigger that listens to the DynamoDB stream and invokes the Lambda function whenever there is a change in the table. This is a common and effective way to react to changes in DynamoDB tables with AWS Lambda functions.
upvoted 2 times
...
Baba_Eni
5 months ago
Selected Answer: C
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
upvoted 2 times
...
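For answer C in practice: once DynamoDB Streams is enabled on the table and the stream is mapped to the function as a trigger, Lambda is invoked with batches of change records. A minimal sketch of such a handler (the key names are hypothetical; the record fields follow the DynamoDB Streams event shape):

```python
def handler(event, context):
    # Each stream record carries the change type plus key/item images,
    # depending on the stream view type configured on the table.
    changes = []
    for record in event.get("Records", []):
        changes.append({
            "action": record["eventName"],  # INSERT | MODIFY | REMOVE
            "keys": record["dynamodb"].get("Keys", {}),
        })
    return changes
```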
Question #47 Topic 1

An application uses an Amazon EC2 Auto Scaling group. A developer notices that EC2 instances are taking a long time to become available during scale-out events. The UserData script is taking a long time to run.
The developer must implement a solution to decrease the time that elapses before an EC2 instance becomes available. The solution must make the most recent version of the application available at all times and must apply all available security updates. The solution also must minimize the number of images that are created. The images must be validated.
Which combination of steps should the developer take to meet these requirements? (Choose two.)

  • A. Use EC2 Image Builder to create an Amazon Machine Image (AMI). Install all the patches and agents that are needed to manage and run the application. Update the Auto Scaling group launch configuration to use the AMI.
  • B. Use EC2 Image Builder to create an Amazon Machine Image (AMI). Install the latest version of the application and all the patches and agents that are needed to manage and run the application. Update the Auto Scaling group launch configuration to use the AMI.
  • C. Set up AWS CodeDeploy to deploy the most recent version of the application at runtime.
  • D. Set up AWS CodePipeline to deploy the most recent version of the application at runtime.
  • E. Remove any commands that perform operating system patching from the UserData script.

Correct Answer: AB 🗳️

Community vote distribution
AC (46%)
AE (40%)
12%

imvb88
Highly Voted 5 months, 2 weeks ago
Selected Answer: AE
Why choose A over B? The problem is that B ties an AMI to a specific application version, so for every new version we would need to create a new AMI, and that contradicts "minimize the number of images that are created". Then why E over C, D? E is obviously complementary to A, where removing commands from UserData will make the instance boot much faster (and of course with A you don't need them anymore). C and D also work but 1/ are not complementary with any other options; 2/ CodeDeploy takes time to execute. Hope this helps somebody struggling with this question.
upvoted 19 times
minh12312312
1 week, 5 days ago
The solution must make the most recent version of the application available at all times
upvoted 1 times
...
r3mo
3 months, 2 weeks ago
And what about this requirement? "The solution must make the most recent version of the application available at all times". Only answer B fulfills this part.
upvoted 3 times
...
yashika2005
5 months ago
thanksss a lott!
upvoted 1 times
...
...
KillThemWithKindness
Highly Voted 3 months, 1 week ago
Selected Answer: AC
Option E, which suggests removing operating system patching from the UserData script, might reduce the startup time. But this could leave your instances unpatched and vulnerable, which doesn't meet the requirement to apply all available security updates.
upvoted 9 times
...
ronn555
Most Recent 17 hours, 9 minutes ago
Selected Answer: AC
A is correct. C vs E: C satisfies the latest-software requirement. E contradicts the latest-patch requirement; it is a red herring next to A because you assume patches are unnecessary on a patched image, but eventually they will be needed.
upvoted 1 times
...
Jonalb
1 week, 4 days ago
Selected Answer: AC
A. Use EC2 Image Builder to create an Amazon Machine Image (AMI). Install all the patches and agents that are needed to manage and run the application. Update the Auto Scaling group launch configuration to use the AMI. C. Set up AWS CodeDeploy to deploy the most recent version of the application at runtime.
upvoted 1 times
...
Rameez1
3 weeks, 3 days ago
Selected Answer: AC
If I eliminate the options that contradict the requirements, B, D, and E get eliminated as follows: B would need to recreate the AMI for every version update (the requirement is to minimize image creation); by contrast, A boots faster with all necessary packages and a minimum number of AMI creations. D: CodePipeline can't deploy code on its own and would need CodeDeploy to do it, making C the better choice. E: the UserData script is necessary for security updates.
upvoted 1 times
...
Cerakoted
3 weeks, 5 days ago
Selected Answer: AC
I think AC. Why not AE? The question says the solution "must apply all available security updates", so the UserData script still needs to update the OS.
upvoted 1 times
...
Die_fa_ed
1 month, 1 week ago
Selected Answer: AC
- Option B: Use EC2 Image Builder to create an Amazon Machine Image (AMI) that includes the latest version of the application and all necessary patches and agents. This ensures that the AMI is up-to-date and ready to use. Then, update the Auto Scaling group launch configuration to use this AMI. - Option C: Set up AWS CodeDeploy to deploy the most recent version of the application at runtime. CodeDeploy allows you to easily manage and deploy application updates without creating new AMIs. This helps ensure that the most recent version of the application is available without the need to recreate AMIs. These steps minimize the number of images created (as you update the AMI when necessary) and allow for efficient updates of the application while ensuring security patches and updates are applied.
upvoted 1 times
...
appuNBablu
1 month, 2 weeks ago
I would say AC, but I see many answers of AE. How is AE the answer? The question says we need a solution that also deploys the latest code.
upvoted 2 times
...
Kashan6109
2 months ago
Selected Answer: BE
Option A is not correct because we need the most recent version of the application as well, which is only fulfilled by option B
upvoted 1 times
...
love777
2 months, 1 week ago
Selected Answer: AC
Option E, which suggests removing any commands that perform operating system patching from the UserData script, might not be the best idea for ensuring the security and stability of your EC2 instances and application. Here's why it could be considered a bad idea: Security Vulnerabilities: Operating system patches often include security updates that address known vulnerabilities and protect your instances from potential threats. By removing patching from the UserData script, you might leave your instances exposed to security risks.
upvoted 3 times
...
ninomfr64
2 months, 2 weeks ago
Selected Answer: AC
A) makes sure the instances boot faster by having all patches and dependencies baked into the AMI (B would do that too, but would create a new AMI for every new app version, conflicting with the requirement to "minimize the number of images that are created"). C) When new EC2 instances are launched as part of an Auto Scaling group, CodeDeploy can deploy your revisions to the new instances automatically. This meets the requirement to "make the most recent application version available at all times".
upvoted 5 times
...
jipark
3 months ago
Selected Answer: BE
B. Using EC2 Image Builder to create an AMI ensures that the most recent version of the application, along with all necessary patches and agents, is pre-installed in the image. This reduces the time required during the scaling events because instances launched from this AMI will already have the application and updates in place. E. Removing operating system patching commands from the UserData script is essential because, during scale-out events, the UserData script is executed when a new EC2 instance is launched. If the script is performing time-consuming patching, it will increase the time it takes for the instance to become available. By removing the patching from the script and using a pre-built AMI with the latest patches, the instances will be ready much faster.
upvoted 1 times
...
[Removed]
3 months, 3 weeks ago
Selected Answer: AE
AE Not AC because app deployment from UserData is nonsense. Therefore, you don't need to change anything about deployment
upvoted 1 times
...
acordovam
3 months, 3 weeks ago
Selected Answer: AC
A is the obvious choice to reduce the time until the EC2 instance is available. C because CodeDeploy can deploy the latest version on an ASG scale-out event https://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws-auto-scaling.html
upvoted 4 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: BC
I am not 100% confident but vote for B and C
upvoted 1 times
...
Pupina
3 months, 4 weeks ago
Selected Answer: BC. I agree with eboehm2. B because it is the standard way to apply patches. C because it is necessary to update the app to the latest version, and B does not do that; you can do that automatically with CodeDeploy. https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-auto-scaling-group.html https://docs.aws.amazon.com/codedeploy/latest/userguide/integrations-aws-auto-scaling.html
upvoted 1 times
...
stlim83
4 months ago
Selected Answer: AC
The requirements: the solution must make the most recent version of the application available at all times and must apply all available security updates. B is incorrect because we need the most recent version of the application; this means we would have to recreate the AMI for every update. D is incorrect because CodePipeline can't deploy anything itself; it uses CodeDeploy for the deployment. E is also incorrect, because the solution must apply all available security updates; if we delete all commands for the OS updates, it can't meet the requirements.
upvoted 5 times
...
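For the CodeDeploy half of answer AC, the deployment of the latest revision onto freshly launched instances is driven by an appspec file bundled with the application revision. A minimal sketch, where the paths and script name are hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /app
    destination: /opt/retail-app
hooks:
  ApplicationStart:
    - location: scripts/start_app.sh
      timeout: 300
```

With the Auto Scaling group registered as a CodeDeploy deployment target, each scale-out instance receives the most recent successful revision automatically.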
Question #48 Topic 1

A developer is creating an AWS Lambda function that needs credentials to connect to an Amazon RDS for MySQL database. An Amazon S3 bucket currently stores the credentials. The developer needs to improve the existing solution by implementing credential rotation and secure storage. The developer also needs to provide integration with the Lambda function.
Which solution should the developer use to store and retrieve the credentials with the LEAST management overhead?

  • A. Store the credentials in AWS Systems Manager Parameter Store. Select the database that the parameter will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the parameter. Enable automatic rotation for the parameter. Use the parameter from Parameter Store on the Lambda function to connect to the database.
  • B. Encrypt the credentials with the default AWS Key Management Service (AWS KMS) key. Store the credentials as environment variables for the Lambda function. Create a second Lambda function to generate new credentials and to rotate the credentials by updating the environment variables of the first Lambda function. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the database to use the new credentials. On the first Lambda function, retrieve the credentials from the environment variables. Decrypt the credentials by using AWS KMS. Connect to the database.
  • C. Store the credentials in AWS Secrets Manager. Set the secret type to Credentials for Amazon RDS database. Select the database that the secret will access. Use the default AWS Key Management Service (AWS KMS) key to encrypt the secret. Enable automatic rotation for the secret. Use the secret from Secrets Manager on the Lambda function to connect to the database.
  • D. Encrypt the credentials by using AWS Key Management Service (AWS KMS). Store the credentials in an Amazon DynamoDB table. Create a second Lambda function to rotate the credentials. Invoke the second Lambda function by using an Amazon EventBridge rule that runs on a schedule. Update the DynamoDB table. Update the database to use the generated credentials. Retrieve the credentials from DynamoDB with the first Lambda function. Connect to the database.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html
upvoted 10 times
jipark
3 months ago
"automatic rotation" "cross region" - Secrets Manager
upvoted 1 times
...
...
jayvarma
Most Recent 2 months, 4 weeks ago
Option C. Keyword: Implementing credential rotation and secure storage.
upvoted 1 times
...
Mtho96
4 months ago
C. This solution minimizes management overhead by leveraging the built-in capabilities of AWS Secrets Manager, such as encryption, automatic rotation, and integration with AWS Lambda. It provides a secure and efficient way to store and retrieve credentials. https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html
upvoted 2 times
...
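The retrieval step of answer C can be sketched as follows. This is a minimal sketch: the secret name is hypothetical, and in a real Lambda the client would come from `boto3.client("secretsmanager")`; here it is injected so the function is testable.

```python
import json

def get_db_credentials(secrets_client, secret_id):
    # For secrets of type "Credentials for Amazon RDS database",
    # SecretString is a JSON document with username, password, host, etc.
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```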
Question #49 Topic 1

A developer has written the following IAM policy to provide access to an Amazon S3 bucket:

Which access does the policy allow regarding the s3:GetObject and s3:PutObject actions?

  • A. Access on all buckets except the “DOC-EXAMPLE-BUCKET” bucket
  • B. Access on all buckets that start with “DOC-EXAMPLE-BUCKET” except the “DOC-EXAMPLE-BUCKET/secrets” bucket
  • C. Access on all objects in the “DOC-EXAMPLE-BUCKET” bucket along with access to all S3 actions for objects in the “DOC-EXAMPLE-BUCKET” bucket that start with “secrets”
  • D. Access on all objects in the “DOC-EXAMPLE-BUCKET” bucket except on objects that start with “secrets”

Correct Answer: D 🗳️

Community vote distribution
D (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: D
D https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html
upvoted 9 times
...
nmc12
Most Recent 1 month ago
Selected Answer: D
D https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html
upvoted 1 times
...
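The policy from the original question is not reproduced in this dump. A hypothetical policy consistent with answer D, i.e. an Allow on all objects in the bucket plus an explicit Deny on the secrets prefix (an explicit Deny always overrides an Allow), could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
    },
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/secrets*"
    }
  ]
}
```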
Question #50 Topic 1

A developer is creating a mobile app that calls a backend service by using an Amazon API Gateway REST API. For integration testing during the development phase, the developer wants to simulate different backend responses without invoking the backend service.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an AWS Lambda function. Use API Gateway proxy integration to return constant HTTP responses.
  • B. Create an Amazon EC2 instance that serves the backend REST API by using an AWS CloudFormation template.
  • C. Customize the API Gateway stage to select a response type based on the request.
  • D. Use a request mapping template to select the mock integration response.

Correct Answer: B 🗳️

Community vote distribution
D (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: D
D https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html
upvoted 11 times
...
Dun6
Highly Voted 7 months, 2 weeks ago
Chatgpt said D
upvoted 6 times
...
Umuntu
Most Recent 1 month ago
D. Use a request mapping template to select the mock integration response. Option D allows you to use a request mapping template in API Gateway to select the mock integration response. This approach allows you to simulate different backend responses without invoking the actual backend service. It provides flexibility and control over the responses without the need for additional AWS resources like Lambda functions or EC2 instances, thus minimizing operational overhead.
upvoted 2 times
...
hsinchang
1 month, 3 weeks ago
without invoking backend service -> mock
upvoted 1 times
...
ninomfr64
2 months, 2 weeks ago
Selected Answer: D
D as per the doc https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html The wording confused me a bit: with a mapping template you do not "select" a response, instead you actually craft it in this case
upvoted 1 times
...
KhyatiChhajed
6 months ago
Selected Answer: D
it's D
upvoted 1 times
...
March2023
7 months, 2 weeks ago
Selected Answer: D
I'm going with D as well.
upvoted 4 times
...
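For answer D in practice: with a mock integration, the integration request mapping template returns a statusCode that API Gateway matches against an integration response, so no backend is ever invoked. A sketch following the pattern in the mock-integration docs linked above (the `scenario` query parameter is hypothetical):

```
{
#if( $input.params('scenario') == "error" )
  "statusCode": 500
#else
  "statusCode": 200
#end
}
```

The integration response mapped to each status code then returns the canned body the mobile app should see during testing.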
Question #51 Topic 1

A developer has a legacy application that is hosted on-premises. Other applications hosted on AWS depend on the on-premises application for proper functioning. In case of any application errors, the developer wants to be able to use Amazon CloudWatch to monitor and troubleshoot all applications from one place.
How can the developer accomplish this?

  • A. Install an AWS SDK on the on-premises server to automatically send logs to CloudWatch.
  • B. Download the CloudWatch agent to the on-premises server. Configure the agent to use IAM user credentials with permissions for CloudWatch.
  • C. Upload log files from the on-premises server to Amazon S3 and have CloudWatch read the files.
  • D. Upload log files from the on-premises server to an Amazon EC2 instance and have the instance forward the logs to CloudWatch.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-premise.html
upvoted 9 times
...
Dun6
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
We need cloudwatchagent
upvoted 5 times
...
Baba_Eni
Most Recent 5 months ago
Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
upvoted 2 times
...
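Answer B involves two pieces on the on-premises server: the IAM user's credentials (the on-premises agent reads them from a shared credentials profile rather than an instance role) and an agent configuration naming the log files to ship. A minimal sketch of the agent config, where the file path, log group name, and region are hypothetical:

```json
{
  "agent": {
    "region": "us-east-1"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/legacy-app/app.log",
            "log_group_name": "onprem-legacy-app",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
```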
Question #52 Topic 1

An Amazon Kinesis Data Firehose delivery stream is receiving customer data that contains personally identifiable information. A developer needs to remove pattern-based customer identifiers from the data and store the modified data in an Amazon S3 bucket.
What should the developer do to meet these requirements?

  • A. Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.
  • B. Launch an Amazon EC2 instance. Set the EC2 instance as the destination of the delivery stream. Run an application on the EC2 instance to remove the customer identifiers. Store the transformed data in an Amazon S3 bucket.
  • C. Create an Amazon OpenSearch Service instance. Set the OpenSearch Service instance as the destination of the delivery stream. Use search and replace to remove the customer identifiers. Export the data to an Amazon S3 bucket.
  • D. Create an AWS Step Functions workflow to remove the customer identifiers. As the last step in the workflow, store the transformed data in an Amazon S3 bucket. Set the workflow as the destination of the delivery stream.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
upvoted 10 times
...
tttamtttam
Most Recent 3 months, 3 weeks ago
Selected Answer: A
It supports custom data transformation using AWS Lambda
upvoted 2 times
...
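For answer A: a Firehose transformation Lambda receives base64-encoded records and must return each one with a result status. A minimal sketch, assuming a hypothetical `CUST-` followed by eight digits as the pattern-based customer identifier:

```python
import base64
import json
import re

# Hypothetical pattern: customer IDs such as "CUST-12345678" in the payload.
CUSTOMER_ID_PATTERN = re.compile(r"CUST-\d{8}")

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        scrubbed = CUSTOMER_ID_PATTERN.sub("[REDACTED]", payload)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # tells Firehose the record was transformed successfully
            "data": base64.b64encode(scrubbed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Firehose then delivers the scrubbed records to the configured S3 destination.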
Question #53 Topic 1

A developer is using an AWS Lambda function to generate avatars for profile pictures that are uploaded to an Amazon S3 bucket. The Lambda function is automatically invoked for profile pictures that are saved under the /original/ S3 prefix. The developer notices that some pictures cause the Lambda function to time out. The developer wants to implement a fallback mechanism by using another Lambda function that resizes the profile picture.
Which solution will meet these requirements with the LEAST development effort?

  • A. Set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Set the SQS queue as a destination with an on failure condition for the avatar generator Lambda function. Configure the image resize Lambda function to poll from the SQS queue.
  • C. Create an AWS Step Functions state machine that invokes the avatar generator Lambda function and uses the image resize Lambda function as a fallback. Create an Amazon EventBridge rule that matches events from the S3 bucket to invoke the state machine.
  • D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Set the SNS topic as a destination with an on failure condition for the avatar generator Lambda function. Subscribe the image resize Lambda function to the SNS topic.

Correct Answer: C 🗳️

Community vote distribution
A (59%)
C (20%)
B (20%)

March2023
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
Wouldn't A be the least effort?
upvoted 11 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C. Before executing the recovery Lambda function, the fallback mechanism must catch the timeout error of the generator Lambda function. https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html
upvoted 6 times
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: A
A. Set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing
upvoted 1 times
...
jingle4944
1 month, 1 week ago
Selected Answer: A
Previously, you needed to write the SQS/SNS/EventBridge handling code within your Lambda function and manage retries and failures yourself. With Destinations, you can route asynchronous function results as an execution record to a destination resource without writing additional code. https://aws.amazon.com/ru/blogs/compute/introducing-aws-lambda-destinations/
upvoted 3 times
...
appuNBablu
1 month, 2 weeks ago
A, because we can map another Lambda function as destination alongside (SQS, SNS, Event Bridge)
upvoted 1 times
...
ninomfr64
2 months, 2 weeks ago
Selected Answer: A
A is the easiest option https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations
upvoted 1 times
...
jayvarma
2 months, 4 weeks ago
Option B is the right answer. Can someone say why B cannot be the right answer for this question? Option A fails when there are huge amounts of requests coming to the Lambda functions. There is every chance for Lambda to throw a ProvisionedThroughputExceeded exception because of throttling, which is almost the same reason why option C will also fail at some point. However, you could use SNS, but it is not the best solution. Definitely option B.
upvoted 5 times
...
backfringe
3 months, 1 week ago
Selected Answer: A
Least amount of effort: set up an on-failure destination routing failed events to the resize Lambda
upvoted 1 times
...
AWSdeveloper08
3 months, 2 weeks ago
Selected Answer: B
I agree with the explanation for option B. Scalability is the key
upvoted 2 times
...
[Removed]
3 months, 3 weeks ago
Selected Answer: A
A is a simplest solution https://aws.amazon.com/ru/blogs/compute/introducing-aws-lambda-destinations/ https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations
upvoted 3 times
jipark
3 months, 1 week ago
Your explanation looks correct. Lambda "Destination" seems like the exact solution for this. It explains how to handle success and failed cases.
upvoted 1 times
...
...
umer1998
4 months, 2 weeks ago
I agree with B because I am considering scalability: with thousands/millions of requests at the same time, the Lambda can fail because of the quota limit if we continuously call two functions together (Step Functions), which may cause the other function to throttle. If we pass the message to SQS, our function will never face this throttling issue. And since the question asks for the least development effort, separation of concerns will make development easier.
upvoted 2 times
...
ScherbakovMike
5 months, 1 week ago
SQS or SNS can be assigned as 'TargetArn' in the 'DeadLetterConfig'. I think, D variant is more appropriate: in case of timeout (image is too large), there will be push to SNS and to subscribed resizing function. Subscribed resizing function writes the resized image to S3 and original Lambda function processes the resized image again.
upvoted 1 times
...
rlnd2000
5 months, 2 weeks ago
Selected Answer: B
B is the best option in my opinion, I agree with Nagendhar and junrun3 explanations and because decoupling using SQS is a best practice, I think when they say ... with the LEAST development effort that imply following the best practices in AWS.
upvoted 3 times
...
marijabtw
5 months, 3 weeks ago
Selected Answer: C
The key in the question is "LEAST development effort", which indicates that we should choose step functions.
upvoted 3 times
...
Nagendhar
5 months, 3 weeks ago
Ans: B Option B involves creating an Amazon SQS queue and setting the SQS queue as a destination with an on failure condition for the avatar generator Lambda function. The image resize Lambda function is then configured to poll from the SQS queue. This approach ensures that the image resize Lambda function is invoked in case of a timeout, and using an SQS queue is a common pattern for decoupling services. This approach requires the least development effort because it involves setting up an SQS queue and configuring the Lambda functions to use it, which is a simple process.
upvoted 3 times
...
junrun3
5 months, 3 weeks ago
Selected Answer: B
In case B, the SQS queue can be used to send a message containing a failure condition for the avatar generator Lambda function. The image resize Lambda function can then be configured to poll the SQS queue. This will ensure that the image resize Lambda function is retried as needed, reducing costs.
upvoted 1 times
...
Rpod
6 months, 2 weeks ago
Selected Answer: A
Chatgpt says A
upvoted 2 times
ihebchorfi
6 months, 1 week ago
mine says B
upvoted 4 times
...
...
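Answer A (a Lambda on-failure destination) can be configured without touching either function's code, for example via the event invoke config API. A hedged boto3-style sketch; the function name and ARN are hypothetical, and the client is injected here so the call shape can be checked:

```python
def set_failure_destination(lambda_client, function_name, fallback_arn):
    # Failed asynchronous invocations (including timeouts, once retries are
    # exhausted) are routed to the fallback function as an on-failure
    # destination; no extra handling code is needed in either function.
    return lambda_client.put_function_event_invoke_config(
        FunctionName=function_name,
        MaximumRetryAttempts=1,
        DestinationConfig={"OnFailure": {"Destination": fallback_arn}},
    )
```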
Question #54 Topic 1

A developer needs to migrate an online retail application to AWS to handle an anticipated increase in traffic. The application currently runs on two servers: one server for the web application and another server for the database. The web server renders webpages and manages session state in memory. The database server hosts a MySQL database that contains order details. When traffic to the application is heavy, the memory usage for the web server approaches 100% and the application slows down considerably.
The developer has found that most of the memory increase and performance decrease is related to the load of managing additional user sessions. For the web server migration, the developer will use Amazon EC2 instances with an Auto Scaling group behind an Application Load Balancer.
Which additional set of changes should the developer make to the application to improve the application's performance?

  • A. Use an EC2 instance to host the MySQL database. Store the session data and the application data in the MySQL database.
  • B. Use Amazon ElastiCache for Memcached to store and manage the session data. Use an Amazon RDS for MySQL DB instance to store the application data.
  • C. Use Amazon ElastiCache for Memcached to store and manage the session data and the application data.
  • D. Use the EC2 instance store to manage the session data. Use an Amazon RDS for MySQL DB instance to store the application data.

Correct Answer: A 🗳️

Community vote distribution
B (95%)
5%

clarksu
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
Option B. How can you imagine using an EC2 instance as a cache ...
upvoted 7 times
...
Dun6
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B it is
upvoted 6 times
...
Aws_aspr
Most Recent 2 months, 3 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
...
nkelesidis
3 months, 3 weeks ago
Selected Answer: A
I choose A. It says that most of the memory increase is related to the load of managing additional user sessions, so I think Memcached doesn't make sense. Also, isn't it bad practice to store session information in a DB?
upvoted 1 times
ninomfr64
2 months, 2 weeks ago
Session Store is one of the main use cases for ElastiCache for Memcached, as per the AWS website: https://aws.amazon.com/elasticache/memcached/#:~:text=ElastiCache%20for%20Memcached.-,Session%20Store,-Session%20stores%20are
upvoted 3 times
...
...
Untamables
7 months, 2 weeks ago
Selected Answer: B
B Session stores are easy to create with Amazon ElastiCache for Memcached. https://aws.amazon.com/elasticache/memcached/ With Amazon RDS, you can deploy scalable MySQL servers in minutes with cost-efficient and resizable hardware capacity. https://aws.amazon.com/rds/mysql/
upvoted 5 times
...
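The session-store pattern from option B can be sketched in a few lines. This is a minimal illustration, not a real client: a plain dict stands in for the ElastiCache for Memcached cluster so the example is self-contained (a real deployment would use a Memcached client such as pymemcache pointed at the cluster's configuration endpoint, and the TTL value here is an assumed example).

```python
import time

class SessionStore:
    """Sketch of an externalized session store (option B's pattern)."""

    def __init__(self, ttl_seconds=1800):
        self._ttl = ttl_seconds
        self._data = {}  # stand-in for the Memcached cluster

    def put(self, session_id, session_data):
        # Memcached's set() takes an expiry; we emulate it with a timestamp.
        self._data[session_id] = (session_data, time.time() + self._ttl)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        session_data, expires_at = entry
        if time.time() > expires_at:
            del self._data[session_id]  # expired, as Memcached would evict
            return None
        return session_data

store = SessionStore(ttl_seconds=1800)
store.put("sess-123", {"user": "alice", "cart_items": 2})
print(store.get("sess-123"))
```

Because the store lives outside the web servers, session state survives scale-in events behind the Application Load Balancer and no longer consumes web-server memory.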
Question #55 Topic 1

An application uses Lambda functions to extract metadata from files uploaded to an S3 bucket; the metadata is stored in Amazon DynamoDB. The application starts behaving unexpectedly, and the developer wants to examine the logs of the Lambda function code for errors.
Based on this system configuration, where would the developer find the logs?

  • A. Amazon S3
  • B. AWS CloudTrail
  • C. Amazon CloudWatch
  • D. Amazon DynamoDB

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/lambda-logging-metrics.html
upvoted 6 times
...
AhmedAliHashmi
Most Recent 2 months, 1 week ago
Answer is C
upvoted 1 times
...
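As background for answer C: Lambda writes function output to CloudWatch Logs under a log group named `/aws/lambda/<function-name>`. The helper below builds that name; the commented boto3 call is a sketch of how the developer could pull recent error lines (it needs AWS credentials, so it is not executed here, and the function name is an assumed example).

```python
def lambda_log_group(function_name):
    # Lambda's default log group follows this fixed naming convention.
    return f"/aws/lambda/{function_name}"

# Sketch only -- requires AWS credentials:
# import boto3
# logs = boto3.client("logs")
# events = logs.filter_log_events(
#     logGroupName=lambda_log_group("extract-metadata"),
#     filterPattern="ERROR",
# )

print(lambda_log_group("extract-metadata"))
```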
Question #56 Topic 1

A company is using an AWS Lambda function to process records from an Amazon Kinesis data stream. The company recently observed slow processing of the records. A developer notices that the iterator age metric for the function is increasing and that the Lambda run duration is constantly above normal.
Which actions should the developer take to increase the processing speed? (Choose two.)

  • A. Increase the number of shards of the Kinesis data stream.
  • B. Decrease the timeout of the Lambda function.
  • C. Increase the memory that is allocated to the Lambda function.
  • D. Decrease the number of shards of the Kinesis data stream.
  • E. Increase the timeout of the Lambda function.

Correct Answer: AC 🗳️

Community vote distribution
AC (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: AC
A and C https://repost.aws/knowledge-center/lambda-iterator-age
upvoted 11 times
...
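A back-of-envelope check of why option A helps: each Kinesis shard accepts up to 1 MB/s or 1,000 records/s of writes, and Lambda processes each shard's batches in order, so a growing iterator age often means the stream needs more shards (while option C, more memory, speeds up each invocation). The sizing helper below uses only those per-shard write quotas; the traffic numbers are illustrative assumptions.

```python
import math

def min_shards(records_per_second, avg_record_kb):
    """Smallest shard count that satisfies both per-shard write quotas."""
    by_count = records_per_second / 1000.0                 # 1,000 records/s/shard
    by_bytes = (records_per_second * avg_record_kb) / 1024.0  # 1 MB/s/shard
    return max(1, math.ceil(max(by_count, by_bytes)))

print(min_shards(2500, 1))  # record-count-bound: 3 shards
print(min_shards(500, 5))   # byte-bound (~2.44 MB/s): 3 shards
```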
Question #57 Topic 1

A company needs to harden its container images before the images are in a running state. The company's application uses Amazon Elastic Container Registry (Amazon ECR) as an image registry, Amazon Elastic Kubernetes Service (Amazon EKS) for compute, and an AWS CodePipeline pipeline that orchestrates a continuous integration and continuous delivery (CI/CD) workflow.
Dynamic application security testing occurs in the final stage of the pipeline after a new image is deployed to a development namespace in the EKS cluster. A developer needs to place an analysis stage before this deployment to analyze the container image earlier in the CI/CD pipeline.
Which solution will meet these requirements with the MOST operational efficiency?

  • A. Build the container image and run the docker scan command locally. Mitigate any findings before pushing changes to the source code repository. Write a pre-commit hook that enforces the use of this workflow before commit.
  • B. Create a new CodePipeline stage that occurs after the container image is built. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.
  • C. Create a new CodePipeline stage that occurs after source code has been retrieved from its repository. Run a security scanner on the latest revision of the source code. Fail the pipeline if there are findings.
  • D. Add an action to the deployment stage of the pipeline so that the action occurs before the deployment to the EKS cluster. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings.

Correct Answer: D 🗳️

Community vote distribution
B (79%)
D (21%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: B
B https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-basic.html The below blog post refers to the solution using Amazon Inspector and ECS, but the architecture is almost same as required in this scenario. The built in image scanning in Amazon ECR provides a simpler solution. https://aws.amazon.com/blogs/security/use-amazon-inspector-to-manage-your-build-and-deploy-pipelines-for-containerized-applications/
upvoted 12 times
...
love777
Highly Voted 2 months, 1 week ago
Selected Answer: B
This approach integrates security scanning directly into the CI/CD pipeline and leverages AWS services for image scanning. Here's how it works: A new CodePipeline stage is added after the container image is built, but before it's pushed to Amazon ECR. ECR basic image scanning is configured to scan the image automatically upon push. This ensures that security scanning is part of the process. An AWS Lambda function is used as an action provider in the pipeline. This Lambda function can be configured to analyze the scan results of the image. If the Lambda function detects any security findings in the scan results, it can fail the pipeline, preventing the deployment of images with security vulnerabilities.
upvoted 5 times
...
imvb88
5 months, 2 weeks ago
Selected Answer: D
So it narrows down to options B and D, which both use ECR basic image scanning. B: create a stage. D: add an action to the existing stage. I'd go with D, since executing an additional action will be faster than executing a whole stage.
upvoted 3 times
Toby_S
5 months ago
The question states "A developer needs to place an analysis stage" therefore I'd go with B.
upvoted 2 times
...
...
Rpod
6 months, 2 weeks ago
Selected Answer: D
Chat GPT says D
upvoted 3 times
Umman
3 months, 1 week ago
ChatGPT says option B
upvoted 1 times
...
...
MrTee
6 months, 3 weeks ago
Selected Answer: B
The developer should choose option B. Create a new CodePipeline stage that occurs after the container image is built. Configure ECR basic image scanning to scan on image push. Use an AWS Lambda function as the action provider. Configure the Lambda function to check the scan results and to fail the pipeline if there are findings. This will allow the developer to place an analysis stage before deployment to analyze the container image earlier in the CI/CD pipeline with the most operational efficiency. CHATGPT
upvoted 5 times
...
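The pipeline-gate Lambda that option B describes can be sketched as follows. The pure helper decides pass/fail from an ECR `DescribeImageScanFindings`-shaped response; the commented handler outline shows where the CodePipeline success/failure calls would go. The severity threshold is an assumption chosen for illustration, not something the question specifies.

```python
BLOCKING = {"CRITICAL", "HIGH"}  # assumed severity threshold for this sketch

def scan_passes(scan_findings, blocking=BLOCKING):
    """True if the ECR scan result contains no blocking-severity findings."""
    counts = scan_findings.get("imageScanFindings", {}).get("findingSeverityCounts", {})
    return not any(sev in blocking and n > 0 for sev, n in counts.items())

# Handler outline (sketch only -- requires AWS credentials):
# def handler(event, context):
#     ecr = boto3.client("ecr")
#     cp = boto3.client("codepipeline")
#     findings = ecr.describe_image_scan_findings(repositoryName=..., imageId=...)
#     job_id = event["CodePipeline.job"]["id"]
#     if scan_passes(findings):
#         cp.put_job_success_result(jobId=job_id)
#     else:
#         cp.put_job_failure_result(jobId=job_id, failureDetails={...})

clean = {"imageScanFindings": {"findingSeverityCounts": {"LOW": 2}}}
bad = {"imageScanFindings": {"findingSeverityCounts": {"CRITICAL": 1, "LOW": 4}}}
print(scan_passes(clean), scan_passes(bad))
```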
Question #58 Topic 1

A developer is testing a new file storage application that uses an Amazon CloudFront distribution to serve content from an Amazon S3 bucket. The distribution accesses the S3 bucket by using an origin access identity (OAI). The S3 bucket's permissions explicitly deny access to all other users.
The application prompts users to authenticate on a login page and then uses signed cookies to allow users to access their personal storage directories. The developer has configured the distribution to use its default cache behavior with restricted viewer access and has set the origin to point to the S3 bucket. However, when the developer tries to navigate to the login page, the developer receives a 403 Forbidden error.
The developer needs to implement a solution to allow unauthenticated access to the login page. The solution also must keep all private content secure.
Which solution will meet these requirements?

  • A. Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to the path of the login page, and make viewer access unrestricted. Keep the default cache behavior's settings unchanged.
  • B. Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to *, and make viewer access restricted. Change the default cache behavior's path pattern to the path of the login page, and make viewer access unrestricted.
  • C. Add a second origin as a failover origin to the default cache behavior. Point the failover origin to the S3 bucket. Set the path pattern for the primary origin to *, and make viewer access restricted. Set the path pattern for the failover origin to the path of the login page, and make viewer access unrestricted.
  • D. Add a bucket policy to the S3 bucket to allow read access. Set the resource on the policy to the Amazon Resource Name (ARN) of the login page object in the S3 bucket. Add a CloudFront function to the default cache behavior to redirect unauthorized requests to the login page's S3 URL.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A If you create additional cache behaviors, the default cache behavior is always the last to be processed. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior
upvoted 8 times
...
ninomfr64
Most Recent 2 months, 2 weeks ago
Selected Answer: A
B) You cannot override the path pattern of the default cache behavior. C) Origin failover is used when the primary origin is not available, which is not our case. D) With this configuration I think users will get a 403 Forbidden error and then be redirected to the login page's S3 URL. A is a workable approach in my opinion.
upvoted 1 times
...
Harddiver
4 months, 4 weeks ago
Should it be D? In case s3 bucket restricts permissions, those should be open for login.
upvoted 3 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: A
By adding a second cache behavior with unrestricted viewer access to the login page's path pattern, unauthenticated users will be allowed to access the login page. At the same time, the default cache behavior's settings remain unchanged, and private content remains secure because it still requires signed cookies for access.
upvoted 3 times
...
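The two cache behaviors that option A describes look roughly like this as CloudFront `DistributionConfig` fragments. Field names follow CloudFront's API shape; the `/login*` path pattern and origin ID are assumed examples, and signed-cookie restriction is modeled via trusted key groups.

```python
# Matched first: unrestricted viewer access to the login page only.
login_behavior = {
    "PathPattern": "/login*",       # assumed path of the login page
    "TargetOriginId": "s3-origin",  # same S3 origin as the default behavior
    "ViewerProtocolPolicy": "redirect-to-https",
    "TrustedKeyGroups": {"Enabled": False, "Quantity": 0},  # no signed cookies needed
}

# Processed last: everything else still requires signed cookies.
default_behavior = {
    "TargetOriginId": "s3-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "TrustedKeyGroups": {"Enabled": True, "Quantity": 1},   # restricted viewer access
}

print(login_behavior["PathPattern"])
```

Because CloudFront evaluates more specific cache behaviors before the default one, only requests for the login page bypass the signed-cookie check.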
Question #59 Topic 1

A developer is using AWS Amplify Hosting to build and deploy an application. The developer is receiving an increased number of bug reports from users. The developer wants to add end-to-end testing to the application to eliminate as many bugs as possible before the bugs reach production.
Which solution should the developer implement to meet these requirements?

  • A. Run the amplify add test command in the Amplify CLI.
  • B. Create unit tests in the application. Deploy the unit tests by using the amplify push command in the Amplify CLI.
  • C. Add a test phase to the amplify.yml build settings for the application.
  • D. Add a test phase to the aws-exports.js file for the application.

Correct Answer: C 🗳️

Community vote distribution
C (81%)
B (19%)

gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: C
Explanation: Adding a test phase to the amplify.yml build settings allows the developer to define and execute end-to-end tests as part of the build and deployment process in AWS Amplify Hosting. This will help ensure that bugs are caught and fixed before the application reaches production, improving the overall quality of the application.
upvoted 8 times
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/amplify/latest/userguide/running-tests.html
upvoted 5 times
jipark
3 months, 1 week ago
Tons of thanks!! The document mentions 'End to End Test'.
upvoted 1 times
...
...
ninomfr64
Most Recent 2 months, 2 weeks ago
Selected Answer: B
B as per https://docs.aws.amazon.com/amplify/latest/userguide/running-tests.html You can run end-to-end (E2E) tests in the test phase of your Amplify app to catch regressions before pushing code to production. The test phase can be configured in the build specification YAML. Currently, you can run only the Cypress testing framework during a build. build specification is provided in the amplify.yml file
upvoted 1 times
...
SachinR28
3 months, 2 weeks ago
Selected Answer: B
I'LL GO WITH B
upvoted 1 times
...
rlnd2000
5 months, 2 weeks ago
Selected Answer: B
We can use the amplify.yml file to run any test commands at build time. Since the test must run while the program is being deployed (E2E), I'll go with B.
upvoted 1 times
...
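A test phase in the build settings (option C) looks roughly like the fragment below. Per the AWS doc linked in the comments above, Amplify currently runs the Cypress framework during a build; the specific commands, ports, and artifact globs here are illustrative assumptions.

```yaml
# Sketch of an amplify.yml test phase (values are illustrative).
version: 1
test:
  phases:
    preTest:
      commands:
        - npm ci
        - npm start &            # serve the app locally for the E2E run
    test:
      commands:
        - npx cypress run        # Cypress is the framework Amplify supports
    postTest:
      commands:
        - kill %1 || true        # stop the local server
  artifacts:
    baseDirectory: cypress
    files:
      - '**/*.png'
      - '**/*.mp4'
```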
Question #60 Topic 1

An ecommerce company is using an AWS Lambda function behind Amazon API Gateway as its application tier. To process orders during checkout, the application calls a POST API from the frontend. The POST API invokes the Lambda function asynchronously. In rare situations, the application has not processed orders. The Lambda application logs show no errors or failures.
What should a developer do to solve this problem?

  • A. Inspect the frontend logs for API failures. Call the POST API manually by using the requests from the log file.
  • B. Create and inspect the Lambda dead-letter queue. Troubleshoot the failed functions. Reprocess the events.
  • C. Inspect the Lambda logs in Amazon CloudWatch for possible errors. Fix the errors.
  • D. Make sure that caching is disabled for the POST API in API Gateway.

Correct Answer: B 🗳️

Community vote distribution
B (54%)
A (29%)
Other

gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: B
Explanation: By configuring a dead-letter queue (DLQ) for the Lambda function, you can capture asynchronous invocation events that were not successfully processed. This allows you to troubleshoot the failed functions and reprocess the events, ensuring that orders are not missed. The DLQ will hold information about the failed events, allowing you to analyze and resolve the issue.
upvoted 8 times
rlnd2000
6 months ago
As you said, "...events that were not successfully processed." But there is no failure in the Lambda log, so the Lambda was not invoked by the POST API event. B is not the answer.
upvoted 2 times
kavi00203
4 months, 3 weeks ago
It's an asynchronous invocation event; that's why there is no log. With asynchronous invocation, the caller isn't guaranteed to get the result after the invocation.
upvoted 2 times
TeeTheMan
3 months, 1 week ago
Asynchronous invocation means that the caller of the lambda does not wait for a response. The type of invocation has no effect on the lambda having logs or not. I picked A, because the lambda not having logs suggests something’s gone wrong upstream of the lambda.
upvoted 2 times
...
...
...
...
Untamables
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
A The Lambda function might have not been called since the Lambda logs show no errors or failures. The cause might be that the frontend application does not call the API or an error occurs in the API Gateway processing.
upvoted 7 times
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: B
B. Create and inspect the Lambda dead-letter queue. Troubleshoot the failed functions. Reprocess the events. Most Voted
upvoted 1 times
...
mr_swal
3 weeks, 4 days ago
Selected Answer: A
The Lambda application logs show no errors or failures, so the Lambda function was not invoked at all.
upvoted 1 times
daicoso
3 weeks, 1 day ago
if the application code doesn't log errors and doesn't throw exceptions, no error or failure will be logged
upvoted 1 times
...
...
nmc12
1 month ago
Selected Answer: B
The Lambda Dead Letter Queue is a feature that helps in troubleshooting events that failed processing by a Lambda function. When an asynchronous invocation of a Lambda function fails, AWS Lambda can direct the failed event to an Amazon SNS topic or an Amazon SQS queue (the dead-letter queue), where the event is stored and can be analyzed or reprocessed.
upvoted 1 times
...
norris81
1 month, 2 weeks ago
Selected Answer: C
I don't like B, which says to reprocess the events; that would cause a whole load of failed events to be processed, creating orders that could be months old.
upvoted 2 times
...
misa27
1 month, 3 weeks ago
Selected Answer: B
B https://aws.amazon.com/what-is/dead-letter-queue/
upvoted 1 times
...
ninomfr64
2 months, 2 weeks ago
Selected Answer: B
A) Asynchronous invocations do not return results to the caller, so I do not expect errors in the frontend log. C) The scenario rules out the option of having error messages in the Lambda log. D) I do not see how caching could have an impact in this scenario. B) A dead-letter queue is a viable option for troubleshooting asynchronous Lambda invocation errors; another option would be using a destination.
upvoted 1 times
...
backfringe
3 months, 1 week ago
Selected Answer: C
Option C is the appropriate choice because it involves inspecting the Lambda logs in Amazon CloudWatch to identify any potential issues or errors that might be causing the orders not to be processed. Option B is not the most appropriate choice because the dead-letter queue is generally used to capture events that cannot be processed by a Lambda function. In this scenario, it seems that the Lambda function is executing without apparent errors, so the issue might not be related to dead-letter queue failures.
upvoted 2 times
...
redfivedog
3 months, 1 week ago
Selected Answer: D
I think D should be the correct answer to this question. The logs have no indications of errors or failed events, so if some transactions are not being processed, that probably means that the lambda function wasn't invoked for those calls. One reason could be that caching is enabled in API gateway for the POST request, so the lambda function isn't triggered for any cache hits. - A is not correct as the frontend would be getting 202s for all asynchronous post requests. - B is not correct because lambda logs have no errors => no lambda execution errors => DLQ won't get any requests of interest if we enable it. A comment below mentioned that asynchronous lambda invocations don't generate logs, but that is not true. - C is obviously incorrect. The premise explicitly mentions that there aren't any errors in the logs.
upvoted 2 times
...
gomurali
4 months, 1 week ago
https://aws.amazon.com/about-aws/whats-new/2016/12/aws-lambda-supports-dead-letter-queues/
upvoted 1 times
...
csG13
5 months ago
Selected Answer: B
It's B. Apparently C & D are wrong. Also it's not A, because the call is async, meaning that the response code from the Lambda service is 202. Since the frontend can generally make POST requests, the problem should be visible somewhere in the backend. Dead-letter queues are for debugging and further analysis. Hence it should be B.
upvoted 3 times
rn5357
1 month, 3 weeks ago
How can you tell from this context that the POST API call was successful?
upvoted 1 times
...
...
Nagendhar
5 months, 4 weeks ago
Ans: B B. Create and inspect the Lambda dead-letter queue. Troubleshoot the failed functions. Reprocess the events. Since the Lambda application logs show no errors or failures, it is possible that the asynchronous invocation is not being processed successfully. In this case, the best solution would be to inspect the Lambda dead-letter queue, which stores failed asynchronous invocations. By doing this, the developer can troubleshoot any failed functions and reprocess the events.
upvoted 3 times
...
clarksu
7 months, 2 weeks ago
Selected Answer: A
B is wrong. If events were sent to the DLQ, there should be failure and retry logs for the Lambda function before they were sent to the DLQ.
upvoted 2 times
...
Dun6
7 months, 2 weeks ago
Selected Answer: B
Use DLQ
upvoted 4 times
...
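The DLQ workflow from answer B can be sketched in two parts: attach an SQS dead-letter queue to the function so failed asynchronous invocations are captured, then drain the queue and inspect/reprocess the original payloads. The commented call needs AWS credentials; the pure helper below it is testable, and all ARNs, queue names, and fields are assumed examples (for async-invocation DLQs, the SQS message body carries the original event JSON).

```python
import json

# Sketch only -- requires AWS credentials:
# boto3.client("lambda").update_function_configuration(
#     FunctionName="process-order",
#     DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:order-dlq"},
# )

def failed_events(sqs_messages):
    """Parse the original invocation payloads out of DLQ messages."""
    return [json.loads(m["Body"]) for m in sqs_messages]

messages = [{"Body": json.dumps({"orderId": "o-1", "total": 42})}]
print(failed_events(messages))
```

Once parsed, each payload can be re-submitted to the function (or to the order-processing logic directly) to reprocess the missed orders.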
Question #61 Topic 1

A company is building a web application on AWS. When a customer sends a request, the application will generate reports and then make the reports available to the customer within one hour. Reports should be accessible to the customer for 8 hours. Some reports are larger than 1 MB. Each report is unique to the customer. The application should delete all reports that are older than 2 days.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Generate the reports and then store the reports as Amazon DynamoDB items that have a specified TTL. Generate a URL that retrieves the reports from DynamoDB. Provide the URL to customers through the web application.
  • B. Generate the reports and then store the reports in an Amazon S3 bucket that uses server-side encryption. Attach the reports to an Amazon Simple Notification Service (Amazon SNS) message. Subscribe the customer to email notifications from Amazon SNS.
  • C. Generate the reports and then store the reports in an Amazon S3 bucket that uses server-side encryption. Generate a presigned URL that contains an expiration date. Provide the URL to customers through the web application. Add S3 Lifecycle configuration rules to the S3 bucket to delete old reports.
  • D. Generate the reports and then store the reports in an Amazon RDS database with a date stamp. Generate a URL that retrieves the reports from the RDS database. Provide the URL to customers through the web application. Schedule an hourly AWS Lambda function to delete database records that have expired date stamps.

Correct Answer: B 🗳️

Community vote distribution
C (100%)

March2023
Highly Voted 7 months, 2 weeks ago
Selected Answer: C
Presigned URL
upvoted 8 times
...
gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: C
Explanation: Storing the reports in an Amazon S3 bucket provides a cost-effective and scalable solution for handling files larger than 1 MB. Server-side encryption ensures data security. Generating a presigned URL with an expiration date allows the customer to access the report for 8 hours, and S3 Lifecycle configuration rules automatically delete the reports older than 2 days, reducing operational overhead.
upvoted 6 times
...
ninomfr64
Most Recent 2 months, 2 weeks ago
A) DynamoDB cannot store items larger than 400 KB. B) SNS cannot send email with attachments: https://repost.aws/questions/QUOvaKJVb3QzOqVENONBZUag/sns-send-file-attachment D) The nature or format of the report is not specified; however, RDS doesn't look like a great place to store large document files. Also, generating a URL to the reports in the RDS database requires some work, while it is a native capability in S3. C) is a workable solution: S3 is designed to store file objects, it makes it easy to generate presigned URLs, and it provides lifecycle management rules that can expire objects.
upvoted 2 times
...
imvb88
5 months, 2 weeks ago
Selected Answer: C
DynamoDB cannot store objects > 400 KB, so option A is out immediately. Limited access to S3 calls for a presigned URL, which is option C. C also has a lifecycle config to delete old objects, while B does not. D is possible but too much effort compared to the design pattern in C.
upvoted 5 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
upvoted 4 times
...
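The two pieces option C relies on can be sketched as the boto3 parameter shapes below: a lifecycle rule that expires objects after 2 days, and presign parameters capping link validity at 8 hours. The shapes follow S3's `put_bucket_lifecycle_configuration` and `generate_presigned_url` APIs; the bucket, key, and prefix names are assumed examples, and the commented call needs AWS credentials.

```python
EIGHT_HOURS = 8 * 60 * 60  # ExpiresIn is expressed in seconds

lifecycle_config = {
    "Rules": [{
        "ID": "expire-reports",
        "Filter": {"Prefix": "reports/"},   # assumed key prefix for reports
        "Status": "Enabled",
        "Expiration": {"Days": 2},          # delete reports older than 2 days
    }]
}

presign_kwargs = {
    "ClientMethod": "get_object",
    "Params": {"Bucket": "report-bucket", "Key": "reports/customer-1.pdf"},
    "ExpiresIn": EIGHT_HOURS,               # URL valid for 8 hours
}

# Sketch only -- requires AWS credentials:
# url = boto3.client("s3").generate_presigned_url(**presign_kwargs)

print(EIGHT_HOURS)
```

Both pieces are fully managed by S3 once configured, which is what makes this the lowest-overhead option.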
Question #62 Topic 1

A company has deployed an application on AWS Elastic Beanstalk. The company has configured the Auto Scaling group that is associated with the Elastic Beanstalk environment to have five Amazon EC2 instances. If the capacity is fewer than four EC2 instances during the deployment, application performance degrades. The company is using the all-at-once deployment policy.
What is the MOST cost-effective way to solve the deployment issue?

  • A. Change the Auto Scaling group to six desired instances.
  • B. Change the deployment policy to traffic splitting. Specify an evaluation time of 1 hour.
  • C. Change the deployment policy to rolling with additional batch. Specify a batch size of 1.
  • D. Change the deployment policy to rolling. Specify a batch size of 2.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: C
Explanation: The rolling with additional batch deployment policy allows Elastic Beanstalk to launch additional instances in a new batch before terminating the old instances. In this case, specifying a batch size of 1 means that Elastic Beanstalk will deploy the application updates to 1 new instance at a time, ensuring that there are always at least 4 instances available during the deployment process. This method maintains application performance while minimizing the additional cost.
upvoted 12 times
...
gagol14
Highly Voted 4 months, 2 weeks ago
Selected Answer: C
1. Rolling with additional batch deployment: This type of deployment maintains full capacity while new application versions are deployed. It launches a new batch of instances with the new application version, and if the new batch is healthy, it terminates a batch of instances running the old application version. 2. Batch size of 1: This will ensure that one new instance is launched with the new version of the application. Once it is deemed healthy, one of the old instances will be terminated. This will continue until all instances are running the new version, ensuring the capacity is never less than four instances. This approach will add only a minimal additional cost for the temporary overlapping instances during deployment.
upvoted 5 times
...
quangphungdev218
Most Recent 3 months, 1 week ago
Selected Answer: C
The correct answer is: C
upvoted 1 times
...
Prem28
5 months ago
The correct answer is: D. Change the deployment policy to rolling. Specify a batch size of 2. A rolling deployment policy will deploy the new application version to one batch of instances at a time, while the other batches continue to serve traffic. This ensures that the application always has at least four instances available during the deployment. Specifying a batch size of 2 means that two instances will be deployed at a time. This is the most cost-effective option because it minimizes the number of instances that are needed to maintain application performance during the deployment. The other options are not as cost-effective because they require more instances to be running during the deployment. Option A requires six instances, option B requires at least five instances, and option C requires at least four instances.
upvoted 1 times
nmc12
1 month ago
If batch size of 1: During the time the new instances are being deployed and are not yet in service, there are only 5 - 2 = 3 old instances available to serve the traffic, which violates the requirement to maintain at least 4 instances to avoid performance degradation. so, i go with A answer.
upvoted 1 times
...
gagol14
4 months, 2 weeks ago
The rolling deployment policy updates a few instances at a time, but unlike the "rolling with additional batch" option, it does not launch new instances before terminating the old ones. Therefore, capacity could drop below four during deployment, affecting application performance.
upvoted 2 times
jipark
3 months, 1 week ago
C: costs 1 additional EC2 instance. D: degrades performance. It looks like the exam gave the "2 batch" detail as a distractor - do not choose that answer.
upvoted 1 times
...
...
...
Untamables
7 months, 2 weeks ago
Selected Answer: C
C https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
upvoted 3 times
...
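The capacity argument in the comments above can be checked with a toy model: "rolling" takes a batch out of service while it updates, whereas "rolling with additional batch" first launches an extra batch, so in-service capacity never drops below the desired count. This is a simplified model of the Elastic Beanstalk behavior for comparison only, not an AWS API call.

```python
def min_in_service(desired, batch_size, additional_batch):
    """Lowest in-service instance count during a deployment (simplified)."""
    if additional_batch:
        return desired            # the extra batch covers the one being updated
    return desired - batch_size   # one batch is out of service at a time

print(min_in_service(5, 1, additional_batch=True))   # option C: stays at 5 (>= 4)
print(min_in_service(5, 2, additional_batch=False))  # option D: drops to 3 (< 4)
```

Option D violates the four-instance floor, while option C meets it at the cost of one temporary extra instance.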
Question #63 Topic 1

A developer is incorporating AWS X-Ray into an application that handles personally identifiable information (PII). The application is hosted on Amazon EC2 instances. The application trace messages include encrypted PII and go to Amazon CloudWatch. The developer needs to ensure that no PII goes outside of the EC2 instances.
Which solution will meet these requirements?

  • A. Manually instrument the X-Ray SDK in the application code.
  • B. Use the X-Ray auto-instrumentation agent.
  • C. Use Amazon Macie to detect and hide PII. Call the X-Ray API from AWS Lambda.
  • D. Use AWS Distro for Open Telemetry.

Correct Answer: B 🗳️

Community vote distribution
A (94%)
6%

gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: A
Explanation: By manually instrumenting the X-Ray SDK in the application code, the developer can have full control over which data is included in the trace messages. This way, the developer can ensure that no PII is sent to X-Ray by carefully handling the PII within the application and not including it in the trace messages.
upvoted 10 times
...
love777
Most Recent 2 months, 1 week ago
Selected Answer: B
The X-Ray auto-instrumentation agent is designed to automatically trace and collect data from AWS resources and services without requiring manual instrumentation in your application code. It helps ensure that sensitive information, such as PII, remains within the EC2 instances by not transmitting the data outside explicitly. The agent focuses on tracing the application behavior and performance without directly sending PII to external services. This solution is suitable for ensuring compliance and data security while still benefiting from X-Ray's tracing and insights.
upvoted 1 times
...
r3mo
3 months, 1 week ago
Option B, because it avoids human error.
upvoted 1 times
...
Umman
3 months, 1 week ago
Using the X-Ray auto-instrumentation agent (Option B) is the best choice in this scenario because it will automatically instrument the application without requiring any manual code changes. Additionally, when using X-Ray with auto-instrumentation, you can control the sampling rate to ensure that only a subset of trace data (and encrypted PII) is sent to X-Ray and CloudWatch, reducing the risk of sensitive data being exposed outside of the instances.
upvoted 2 times
...
jasper_pigeon
3 months, 2 weeks ago
For non-Java applications running on EC2 instances, you will need to use the appropriate X-Ray SDKs to manually instrument the application code. You can't use the auto-instrumentation agent.
upvoted 1 times
...
kris_jec
3 months, 2 weeks ago
It's very clear from the Macie definition that it also provides automated protection, apart from finding the PII data.
upvoted 1 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: A
I think B is incorrect, as the auto-instrumentation agent cannot hide it, right?
upvoted 1 times
...
dan80
6 months, 2 weeks ago
Selected Answer: A
C is wrong, Amazon Macie discover PII but dont hide it
upvoted 2 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: A
A. To avoid sending any PII to the AWS X-Ray service, add instrumentation code in your application at each location that sends trace information, so that PII is eliminated. https://docs.aws.amazon.com/xray/latest/devguide/xray-instrumenting-your-app.html
upvoted 4 times
...
macross
7 months, 2 weeks ago
c https://docs.aws.amazon.com/macie/latest/user/data-classification.html
upvoted 1 times
...
StarLoard
7 months, 2 weeks ago
C : Amazon Macie is a data security service that discovers sensitive data using machine learning and pattern matching, provides visibility into data security risks, and enables you to automate protection against those risks. https://aws.amazon.com/macie/features/?nc1=h_ls
upvoted 3 times
jipark
3 months, 1 week ago
Exactly what it says there.
upvoted 2 times
ninomfr64
2 months, 2 weeks ago
It is my understanding that Macie only supports S3
upvoted 1 times
...
...
...
Question #64 Topic 1

A developer is migrating some features from a legacy monolithic application to use AWS Lambda functions instead. The application currently stores data in an Amazon Aurora DB cluster that runs in private subnets in a VPC. The AWS account has one VPC deployed. The Lambda functions and the DB cluster are deployed in the same AWS Region in the same AWS account.
The developer needs to ensure that the Lambda functions can securely access the DB cluster without crossing the public internet.
Which solution will meet these requirements?

  • A. Configure the DB cluster's public access setting to Yes.
  • B. Configure an Amazon RDS database proxy for the Lambda functions.
  • C. Configure a NAT gateway and a security group for the Lambda functions.
  • D. Configure the VPC, subnets, and a security group for the Lambda functions.

Correct Answer: D 🗳️

Community vote distribution
D (88%)
13%

jayvarma
Highly Voted 2 months, 4 weeks ago
Option D is the right answer. When we want the lambda to privately access the DB cluster instead of moving the traffic over the public internet, we need to have the lambda and db cluster to be in the same VPC. When we configure the VPC, subnets, and a security group for the lambda function, the lambda function will be able to communicate with the db cluster using the private IPs that are associated to the VPC. NAT gateway comes into use when you have the lambda deployed in a private subnet and you would want to provide internet access to it.
upvoted 6 times
...
gpt_test
Highly Voted 7 months, 1 week ago
Selected Answer: D
Explanation: To securely access the Amazon Aurora DB cluster without crossing the public internet, the Lambda functions need to be configured to run within the same VPC as the DB cluster. This involves configuring the VPC, subnets, and a security group for the Lambda functions. This setup ensures that the Lambda functions can communicate with the DB cluster using private IP addresses within the VPC.
upvoted 6 times
...
alex_heavy
Most Recent 1 month ago
Selected Answer: B https://www.udemy.com/course/aws-certified-developer-associate-dva-c01/learn/lecture/36527788#overview https://aws.amazon.com/ru/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
upvoted 1 times
...
eberhe900
4 months ago
Selected Answer: C
https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
upvoted 2 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: D
D https://docs.aws.amazon.com/lambda/latest/dg/foundation-networking.html
upvoted 4 times
...
Dun6
7 months, 2 weeks ago
Selected Answer: D
D is correct. A NAT gateway is for when we want Lambda to access the public internet when it is in a private subnet.
upvoted 4 times
...
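For option D, the Lambda-side change can be sketched with boto3. This is a minimal sketch; the function name, subnet IDs, and security group IDs are placeholders for whatever the account actually uses:

```python
def build_vpc_config(function_name, subnet_ids, security_group_ids):
    """Build update_function_configuration parameters that attach a
    Lambda function to the VPC's private subnets and a security group,
    so it reaches the Aurora cluster over private IPs only."""
    return {
        "FunctionName": function_name,
        "VpcConfig": {
            "SubnetIds": subnet_ids,
            "SecurityGroupIds": security_group_ids,
        },
    }

# Example call (IDs are placeholders):
# boto3.client("lambda").update_function_configuration(
#     **build_vpc_config("query-db", ["subnet-aaa"], ["sg-bbb"]))
```

The DB cluster's security group must then allow inbound traffic from the Lambda function's security group.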
Question #65 Topic 1

A developer is building a new application on AWS. The application uses an AWS Lambda function that retrieves information from an Amazon DynamoDB table. The developer hard coded the DynamoDB table name into the Lambda function code. The table name might change over time. The developer does not want to modify the Lambda code if the table name changes.
Which solution will meet these requirements MOST efficiently?

  • A. Create a Lambda environment variable to store the table name. Use the standard method for the programming language to retrieve the variable.
  • B. Store the table name in a file. Store the file in the /tmp folder. Use the SDK for the programming language to retrieve the table name.
  • C. Create a file to store the table name. Zip the file and upload the file to the Lambda layer. Use the SDK for the programming language to retrieve the table name.
  • D. Create a global variable that is outside the handler in the Lambda function to store the table name.

Correct Answer: C 🗳️

Community vote distribution
A (100%)

Dun6
Highly Voted 7 months, 2 weeks ago
Selected Answer: A
You need to use environment variables
upvoted 7 times
...
eberhe900
Most Recent 4 months ago
Selected Answer: A
You can use environment variables to adjust your function's behavior without updating code. An environment variable is a pair of strings that is stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request.
upvoted 2 times
...
gpt_test
7 months, 1 week ago
Selected Answer: A
Explanation: Using Lambda environment variables allows you to store configuration information separate from your code, which makes it easy to update the table name without changing the Lambda function code. AWS Lambda provides built-in support for environment variables, making it the most efficient solution.
upvoted 4 times
...
Untamables
7 months, 2 weeks ago
Selected Answer: A
A https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html
upvoted 4 times
...
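The environment-variable approach from option A can be sketched as follows; TABLE_NAME is a hypothetical variable name chosen for this example:

```python
import os

def lambda_handler(event, context):
    # Read the table name from the function's configuration instead of
    # hard coding it; changing the table only requires updating the
    # TABLE_NAME environment variable, not the code.
    table_name = os.environ["TABLE_NAME"]
    # ... query the DynamoDB table here using boto3 ...
    return {"table": table_name}
```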
Question #66 Topic 1

A company has a critical application on AWS. The application exposes an HTTP API by using Amazon API Gateway. The API is integrated with an AWS Lambda function. The application stores data in an Amazon RDS for MySQL DB instance with 2 virtual CPUs (vCPUs) and 64 GB of RAM.

Customers have reported that some of the API calls return HTTP 500 Internal Server Error responses. Amazon CloudWatch Logs shows errors for “too many connections.” The errors occur during peak usage times that are unpredictable.

The company needs to make the application resilient. The database cannot be down outside of scheduled maintenance hours.

Which solution will meet these requirements?

  • A. Decrease the number of vCPUs for the DB instance. Increase the max_connections setting.
  • B. Use Amazon RDS Proxy to create a proxy that connects to the DB instance. Update the Lambda function to connect to the proxy.
  • C. Add a CloudWatch alarm that changes the DB instance class when the number of connections increases to more than 1,000.
  • D. Add an Amazon EventBridge rule that increases the max_connections setting of the DB instance when CPU utilization is above 75%.

Correct Answer: B 🗳️

Community vote distribution
B (87%)
13%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
The best solution to meet these requirements would be to use Amazon RDS Proxy to create a proxy that connects to the DB instance and update the Lambda function to connect to the proxy.
upvoted 7 times
...
hsinchang
Most Recent 1 month, 3 weeks ago
Selected Answer: B
B: RDS Proxy establishes and manages the necessary connection pools to your database so that your Lambda function creates fewer database connections. RDS Proxy also handles failovers and retries automatically, which improves the availability of your application. A would reduce the performance and capacity of the database. C may incur additional charges for scaling up the DB instance. It may also cause downtime during the scaling process, which violates the requirement that the database cannot be down outside of scheduled maintenance hours. D may not react fast enough to handle unpredictable peak usage times. It may also cause memory issues if the max_connections setting is too high.
upvoted 1 times
...
love777
2 months, 1 week ago
Selected Answer: B
Adding an Amazon EventBridge rule to increase the max_connections setting based on CPU utilization is not directly addressing the issue of too many connections. Additionally, focusing solely on CPU utilization might not be the best metric for handling connection-related issues.
upvoted 2 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: B
I think D is incorrect because it increases max_connections based on CPU consumption, not on the number of connections.
upvoted 1 times
...
Naj_64
3 months, 4 weeks ago
Selected Answer: D
https://repost.aws/knowledge-center/rds-mysql-max-connections
upvoted 1 times
...
csG13
5 months ago
Selected Answer: B
It’s B. RDS proxy can handle many open connections to the database.
upvoted 2 times
...
awsdummie
5 months, 1 week ago
Selected Answer: D
There should not be any downtime. Create an EventBridge rule to update the max_connections parameter in the DB instance's parameter group.
upvoted 1 times
...
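What RDS Proxy does for the Lambda functions is pool and multiplex database connections so that bursts of invocations don't exhaust the instance's max_connections. A toy, in-memory sketch of the pooling idea (not the proxy's actual implementation):

```python
from contextlib import contextmanager
from queue import Queue

class ConnectionPool:
    """Toy pool illustrating what RDS Proxy provides: a fixed set of
    database connections shared across many short-lived callers,
    instead of each caller opening its own connection."""
    def __init__(self, max_connections, connect):
        self._pool = Queue()
        for _ in range(max_connections):
            self._pool.put(connect())

    @contextmanager
    def connection(self):
        conn = self._pool.get()   # blocks if all connections are in use
        try:
            yield conn
        finally:
            self._pool.put(conn)  # return to the pool instead of closing
```

However many callers arrive, the database only ever sees `max_connections` open connections, which is exactly why the "too many connections" errors stop.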
Question #67 Topic 1

A company has installed smart meters in all its customer locations. The smart meters measure power usage at 1-minute intervals and send the usage readings to a remote endpoint for collection. The company needs to create an endpoint that will receive the smart meter readings and store the readings in a database. The company wants to store the location ID and timestamp information.

The company wants to give its customers low-latency access to their current usage and historical usage on demand. The company expects demand to increase significantly. The solution must not impact performance or include downtime while scaling.

Which solution will meet these requirements MOST cost-effectively?

  • A. Store the smart meter readings in an Amazon RDS database. Create an index on the location ID and timestamp columns. Use the columns to filter on the customers' data.
  • B. Store the smart meter readings in an Amazon DynamoDB table. Create a composite key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
  • C. Store the smart meter readings in Amazon ElastiCache for Redis. Create a SortedSet key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
  • D. Store the smart meter readings in Amazon S3. Partition the data by using the location ID and timestamp columns. Use Amazon Athena to filter on the customers' data.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Gold07
2 weeks, 4 days ago
C is the right answer
upvoted 2 times
...
zoro_chi
1 month ago
Selected Answer: B
Can someone please explain why A isn't viable? Thanks
upvoted 3 times
...
Naj_64
3 months, 1 week ago
Selected Answer: B
Going with B. DynamoDB is the most cost-effective solution.
upvoted 3 times
...
jasper_pigeon
3 months, 2 weeks ago
You need to use Athena as well to do partitioning.
upvoted 2 times
...
HuiHsin
5 months ago
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-sort-keys.html
upvoted 1 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
The most cost-effective solution to meet these requirements would be to store the smart meter readings in an Amazon DynamoDB table and create a composite key using the location ID and timestamp columns
upvoted 4 times
...
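The composite key in option B supports the low-latency lookups directly. A sketch of the Query parameters, assuming a hypothetical table keyed by locationId (partition key) and timestamp (sort key):

```python
def build_usage_query(location_id, start_ts, end_ts,
                      table_name="MeterReadings"):
    """Build DynamoDB Query parameters for one meter's readings in a
    time window. Table and attribute names are placeholders."""
    return {
        "TableName": table_name,
        "KeyConditionExpression":
            "locationId = :loc AND #ts BETWEEN :start AND :end",
        # "timestamp" is a DynamoDB reserved word, so alias it.
        "ExpressionAttributeNames": {"#ts": "timestamp"},
        "ExpressionAttributeValues": {
            ":loc": {"S": location_id},
            ":start": {"S": start_ts},
            ":end": {"S": end_ts},
        },
    }

# boto3.client("dynamodb").query(**build_usage_query(
#     "loc-001", "2023-01-01T00:00", "2023-01-01T01:00"))
```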
Question #68 Topic 1

A company is building a serverless application that uses AWS Lambda functions. The company needs to create a set of test events to test Lambda functions in a development environment. The test events will be created once and then will be used by all the developers in an IAM developer group. The test events must be editable by any of the IAM users in the IAM developer group.

Which solution will meet these requirements?

  • A. Create and store the test events in Amazon S3 as JSON objects. Allow S3 bucket access to all IAM users.
  • B. Create the test events. Configure the event sharing settings to make the test events shareable.
  • C. Create and store the test events in Amazon DynamoDB. Allow access to DynamoDB by using IAM roles.
  • D. Create the test events. Configure the event sharing settings to make the test events private.

Correct Answer: B 🗳️

Community vote distribution
B (81%)
Other

renekton
Highly Voted 6 months ago
Selected Answer: B
Under the "Test" tab there's an option: (Shareable) "This event is available to IAM users within the same account who have permissions to access and use shareable events." You can check this yourself in the Lambda console. Also, here's the documentation: https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html#creating-shareable-events
upvoted 17 times
...
delak
Highly Voted 5 months, 2 weeks ago
Selected Answer: B
Since March of this year, this is now possible to share test events within the same account with different users.
upvoted 5 times
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: B
In AWS Lambda, you can create test events in the AWS console to invoke your function and see the response. These test events can be saved and shared with other IAM users. By configuring the event sharing settings to make the test events shareable, you allow all developers in the IAM developer group to use and edit them.
upvoted 1 times
...
DUBERS
3 months, 1 week ago
Would this not be C just because that's the only one that has the added security of the IAM roles?
upvoted 1 times
...
Cloud_Cloud
6 months, 2 weeks ago
Selected Answer: B
There is an option in the Lambda console to share the event with other users.
upvoted 1 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: A
I meant to select A
upvoted 3 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
To create a set of test events that can be used by all developers in an IAM developer group and that are editable by any of the IAM users in the group, the company should create and store the test events in Amazon S3 as JSON objects and allow S3 bucket access to all IAM users (Option A). This will allow all developers in the IAM developer group to access and edit the test events as needed. The other options do not provide a way for multiple developers to access and edit the test events.
upvoted 1 times
...
Fyssy
6 months, 3 weeks ago
Selected Answer: C
Use roles. Not all IAM users
upvoted 1 times
...
Fyssy
6 months, 3 weeks ago
Selected Answer: A
To create test events that can be edited by any IAM user in a developer group, the company can create an Amazon S3 bucket and store the test event data as JSON files in the bucket.
upvoted 2 times
Naj_64
3 months, 4 weeks ago
A is wrong. To edit a test you only need IAM permissions. "To see, share, and edit shareable test events, you must have permissions for all of the following..." https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html#creating-shareable-events I'll go with B.
upvoted 2 times
...
...
Question #69 Topic 1

A developer is configuring an application's deployment environment in AWS CodePipeline. The application code is stored in a GitHub repository. The developer wants to ensure that the repository package's unit tests run in the new deployment environment. The developer has already set the pipeline's source provider to GitHub and has specified the repository and branch to use in the deployment.

Which combination of steps should the developer take next to meet these requirements with the LEAST overhead? (Choose two.)

  • A. Create an AWS CodeCommit project. Add the repository package's build and test commands to the project's buildspec.
  • B. Create an AWS CodeBuild project. Add the repository package's build and test commands to the project's buildspec.
  • C. Create an AWS CodeDeploy project. Add the repository package's build and test commands to the project's buildspec.
  • D. Add an action to the source stage. Specify the newly created project as the action provider. Specify the build artifact as the action's input artifact.
  • E. Add a new stage to the pipeline after the source stage. Add an action to the new stage. Specify the newly created project as the action provider. Specify the source artifact as the action's input artifact.

Correct Answer: BD 🗳️

Community vote distribution
BE (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
The correct answer is B and E The buildspec file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. By adding the build and test commands to the buildspec file, the developer can ensure that these commands are executed as part of the build process. Option E will ensure that the CodeBuild project is triggered as part of the pipeline and that the unit tests are run in the new deployment environment.
upvoted 14 times
...
imvb88
Highly Voted 5 months, 2 weeks ago
Selected Answer: BE
For those who just skim the question, keyword between D and E is "unit tests run in the new deployment environment.", which signifies a new stage should be created instead of just adding an action.
upvoted 10 times
...
marolisa
Most Recent 2 months, 3 weeks ago
B and D. https://docs.aws.amazon.com/pt_br/codebuild/latest/userguide/how-to-create-pipeline-add-test.html
upvoted 1 times
...
aaok
6 months ago
Selected Answer: BE
As MrTee says.
upvoted 3 times
...
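For B + E, the CodeBuild project's buildspec carries the build and test commands. A minimal sketch, assuming a Node.js package; the runtime version and commands are placeholders for whatever the repository actually uses:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18        # assumption: match the package's runtime
  build:
    commands:
      - npm ci          # hypothetical install command
      - npm test        # run the repository package's unit tests
```

The pipeline's new test stage then invokes this CodeBuild project with the source artifact as its input.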
Question #70 Topic 1

An engineer created an A/B test of a new feature on an Amazon CloudWatch Evidently project. The engineer configured two variations of the feature (Variation A and Variation B) for the test. The engineer wants to work exclusively with Variation A. The engineer needs to make updates so that Variation A is the only variation that appears when the engineer hits the application's endpoint.

Which solution will meet this requirement?

  • A. Add an override to the feature. Set the identifier of the override to the engineer's user ID. Set the variation to Variation A.
  • B. Add an override to the feature. Set the identifier of the override to Variation A. Set the variation to 100%.
  • C. Add an experiment to the project. Set the identifier of the experiment to Variation B. Set the variation to 0%.
  • D. Add an experiment to the project. Set the identifier of the experiment to the AWS account's account ID. Set the variation to Variation A.

Correct Answer: B 🗳️

Community vote distribution
A (100%)

Fyssy
Highly Voted 6 months, 3 weeks ago
Selected Answer: A
Overrides let you pre-define the variation for selected users, so they always receive the specified variation. https://aws.amazon.com/blogs/aws/cloudwatch-evidently/
upvoted 9 times
jipark
3 months, 1 week ago
The key is "override", and it allows only a user ID.
upvoted 1 times
...
...
Baba_Eni
Highly Voted 4 months, 3 weeks ago
Selected Answer: A
Check Bullet point 9 in the link below https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Evidently-newfeature.html
upvoted 6 times
...
hsinchang
Most Recent 1 month, 3 weeks ago
Setting the variation to 0% or 100% makes no sense. Plus, the identifier should not be an account.
upvoted 2 times
...
ancomedian
3 months, 3 weeks ago
Selected Answer: A
You have to give an identifier.
upvoted 1 times
...
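Option A maps onto Evidently's entity-override mechanism. A sketch that builds the request parameters; it assumes boto3's `evidently` client accepts an `entityOverrides` map on `update_feature`, and the project, feature, and user names are placeholders:

```python
def build_override_update(project, feature, user_id,
                          variation="VariationA"):
    """Parameters for an update_feature call adding an override so one
    specific user always receives one variation."""
    return {
        "project": project,
        "feature": feature,
        # Map the engineer's entity ID to the desired variation.
        "entityOverrides": {user_id: variation},
    }

# boto3.client("evidently").update_feature(
#     **build_override_update("ab-test-project", "new-feature",
#                             "engineer-user-id"))
```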
Question #71 Topic 1

A developer is working on an existing application that uses Amazon DynamoDB as its data store. The DynamoDB table has the following attributes: partNumber (partition key), vendor (sort key), description, productFamily, and productType. When the developer analyzes the usage patterns, the developer notices that there are application modules that frequently look for a list of products based on the productFamily and productType attributes.

The developer wants to make changes to the application to improve performance of the query operations.

Which solution will meet these requirements?

  • A. Create a global secondary index (GSI) with productFamily as the partition key and productType as the sort key.
  • B. Create a local secondary index (LSI) with productFamily as the partition key and productType as the sort key.
  • C. Recreate the table. Add partNumber as the partition key and vendor as the sort key. During table creation, add a local secondary index (LSI) with productFamily as the partition key and productType as the sort key.
  • D. Update the queries to use Scan operations with productFamily as the partition key and productType as the sort key.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Fyssy
Highly Voted 6 months, 3 weeks ago
Selected Answer: A
Create a Global Secondary Index (GSI): the developer should create a new GSI on the DynamoDB table with the productFamily attribute as the partition key and the productType attribute as the sort key. This will allow the application to perform fast queries on these attributes without scanning the entire table.
upvoted 8 times
...
Majong
Highly Voted 5 months, 2 weeks ago
Selected Answer: A
An LSI can't be created on an already existing table, and as Fyssy says, A (creating a new GSI) will make the querying faster and you do not need to recreate the whole table.
upvoted 5 times
...
winzzhhzzhh
Most Recent 2 months ago
Selected Answer: A
LSI: different sort key but the same partition key. GSI: different partition key and a different sort key.
upvoted 3 times
...
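Adding the GSI from option A to the existing table can be sketched as an UpdateTable request; the index name is a placeholder:

```python
def build_gsi_update(table_name="Parts"):
    """UpdateTable parameters adding a GSI keyed by productFamily
    (partition key) and productType (sort key). Table and index names
    are hypothetical."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "productFamily", "AttributeType": "S"},
            {"AttributeName": "productType", "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexUpdates": [{
            "Create": {
                "IndexName": "productFamily-productType-index",
                "KeySchema": [
                    {"AttributeName": "productFamily", "KeyType": "HASH"},
                    {"AttributeName": "productType", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }],
    }

# boto3.client("dynamodb").update_table(**build_gsi_update())
```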
Question #72 Topic 1

A developer creates a VPC named VPC-A that has public and private subnets. The developer also creates an Amazon RDS database inside the private subnet of VPC-A. To perform some queries, the developer creates an AWS Lambda function in the default VPC. The Lambda function has code to access the RDS database. When the Lambda function runs, an error message indicates that the function cannot connect to the RDS database.

How can the developer solve this problem?

  • A. Modify the RDS security group. Add a rule to allow traffic from all the ports from the VPC CIDR block.
  • B. Redeploy the Lambda function in the same subnet as the RDS instance. Ensure that the RDS security group allows traffic from the Lambda function.
  • C. Create a security group for the Lambda function. Add a new rule in the RDS security group to allow traffic from the new Lambda security group.
  • D. Create an IAM role. Attach a policy that allows access to the RDS database. Attach the role to the Lambda function.

Correct Answer: C 🗳️

Community vote distribution
B (68%)
C (32%)

Fyssy
Highly Voted 6 months, 3 weeks ago
Selected Answer: B
Redeploy
upvoted 10 times
...
MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
To solve this problem, the developer should redeploy the Lambda function in the same subnet as the RDS instance and ensure that the RDS security group allows traffic from the Lambda function. This will allow the Lambda function to access the RDS database within the private subnet of VPC-A. The developer should also make sure that the Lambda function is configured with the appropriate network settings and permissions to access resources within the VPC.
upvoted 8 times
...
hsinchang
Most Recent 1 month, 3 weeks ago
Selected Answer: B
A security group cannot include services from different VPCs, so the Lambda function needs to be redeployed.
upvoted 1 times
...
love777
2 months, 1 week ago
Selected Answer: C
The issue here is most likely due to the fact that the Lambda function, running in the default VPC, is trying to access the RDS database located in another VPC (VPC-A). By default, resources in different VPCs cannot communicate directly with each other. To enable communication between the Lambda function and the RDS database in a different VPC, you should create a security group for the Lambda function and configure the RDS security group to allow traffic from the Lambda security group.
upvoted 1 times
...
r3mo
3 months, 2 weeks ago
Option 'C' is better. Because it offers a more secure, flexible, and scalable solution for allowing communication between the Lambda function and the RDS database, without tightly coupling the Lambda function with the database's network configuration. It also follows best practices for security and access control.
upvoted 2 times
jipark
3 months, 1 week ago
the key is "security group", not "IAM role"
upvoted 1 times
...
...
Naj_64
3 months, 4 weeks ago
Selected Answer: C
B and C are correct. Going with C though. C will take only a few minutes to implement while redeploying the Lambda function will definitely take more time to complete.
upvoted 3 times
...
sum_la46
4 months, 2 weeks ago
C is the correct answer
upvoted 1 times
...
hexie
4 months, 2 weeks ago
Selected Answer: C
C - well, I'm going for C in this case because the question doesn't say the Lambda function needs to stay in the default VPC, but it also doesn't say the Lambda function will reach only the RDS instance in that specific VPC. Imagine there are other RDS instances in other VPCs in the same project: would he need to deploy a Lambda function in each of them? Creating a security group for the Lambda function makes it easier, by just assigning the security group wherever there is an RDS instance :)
upvoted 4 times
...
Prem28
5 months ago
Selected Answer: C
Option A would allow all traffic from the VPC CIDR block to the RDS instance. This is not a secure configuration. Option B would move the Lambda function to the same subnet as the RDS instance. This is a possible solution, but it is not the most efficient solution. Option D would create an IAM role and attach a policy to the role that allows access to the RDS database. This would allow the Lambda function to access the RDS database, but it would not allow the Lambda function to connect to the RDS instance.
upvoted 2 times
...
Prem28
5 months, 1 week ago
c is correct Option A would allow all traffic from the VPC CIDR block to the RDS instance. This is not a secure configuration. Option B would move the Lambda function to the same subnet as the RDS instance. This is a possible solution, but it is not the most efficient solution. Option D would create an IAM role and attach a policy to the role that allows access to the RDS database. This would allow the Lambda function to access the RDS database, but it would not allow the Lambda function to connect to the RDS instance.
upvoted 1 times
Prem28
5 months, 1 week ago
we know that the Lambda function is running in the default VPC, which is a public VPC. The RDS instance is running in a private subnet, which is not accessible from the public internet. In order for the Lambda function to connect to the RDS instance, the Lambda function must be able to access the private subnet. This can be done by creating a security group for the Lambda function and adding a rule to the RDS security group to allow traffic from the Lambda security group.
upvoted 3 times
...
...
Jamshif01
5 months, 3 weeks ago
Selected Answer: B
A - no, because RDS and Lambda are in different VPCs. C - same as A. D - same as A and C. B is the correct answer.
upvoted 2 times
...
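Whichever answer one prefers, the security-group rule itself (allow the RDS security group to accept traffic from the Lambda function's security group) can be sketched as an authorize_security_group_ingress request; the group IDs and port are placeholders:

```python
def build_ingress_rule(rds_sg_id, lambda_sg_id, port=3306):
    """authorize_security_group_ingress parameters allowing MySQL
    traffic into the RDS security group from the Lambda function's
    security group, instead of opening a CIDR range."""
    return {
        "GroupId": rds_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Reference the source security group, not an IP block.
            "UserIdGroupPairs": [{"GroupId": lambda_sg_id}],
        }],
    }

# boto3.client("ec2").authorize_security_group_ingress(
#     **build_ingress_rule("sg-rds-placeholder", "sg-lambda-placeholder"))
```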
Question #73 Topic 1

A company runs an application on AWS. The company deployed the application on Amazon EC2 instances. The application stores data on Amazon Aurora.

The application recently logged multiple application-specific custom DECRYP_ERROR errors to Amazon CloudWatch logs. The company did not detect the issue until the automated tests that run every 30 minutes failed. A developer must implement a solution that will monitor for the custom errors and alert a development team in real time when these errors occur in the production environment.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Configure the application to create a custom metric and to push the metric to CloudWatch. Create an AWS CloudTrail alarm. Configure the CloudTrail alarm to use an Amazon Simple Notification Service (Amazon SNS) topic to send notifications.
  • B. Create an AWS Lambda function to run every 5 minutes to scan the CloudWatch logs for the keyword DECRYP_ERROR. Configure the Lambda function to use Amazon Simple Notification Service (Amazon SNS) to send a notification.
  • C. Use Amazon CloudWatch Logs to create a metric filter that has a filter pattern for DECRYP_ERROR. Create a CloudWatch alarm on this metric for a threshold >=1. Configure the alarm to send Amazon Simple Notification Service (Amazon SNS) notifications.
  • D. Install the CloudWatch unified agent on the EC2 instance. Configure the application to generate a metric for the keyword DECRYP_ERROR errors. Configure the agent to send Amazon Simple Notification Service (Amazon SNS) notifications.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
To monitor for custom DECRYP_ERROR errors and alert a development team in real time when these errors occur in the production environment with the least operational overhead, the developer should use Amazon CloudWatch Logs to create a metric filter that has a filter pattern for DECRYP_ERROR. The developer should then create a CloudWatch alarm on this metric for a threshold >=1 and configure the alarm to send Amazon Simple Notification Service (Amazon SNS) notifications (Option C). This solution will allow the developer to monitor for custom errors in real time and receive notifications when they occur with minimal operational overhead.
upvoted 8 times
...
hsinchang
Most Recent 1 month, 3 weeks ago
Selected Answer: C
A and B are not real-time, and the CloudWatch unified agent in D is used to collect metrics and logs from EC2 instances and on-premises servers, not to send notifications. So C.
upvoted 1 times
...
Fyssy
6 months, 3 weeks ago
Selected Answer: C
CloudWatch Logs can use filter expressions. For example, find a specific IP inside a log, or count occurrences of "ERROR" in your logs. Metric filters can be used to trigger CloudWatch alarms.
upvoted 2 times
...
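Option C amounts to two API calls: a metric filter on the log group and an alarm on the resulting metric. A sketch of the request parameters; the log group name, metric namespace, and SNS topic ARN are placeholders:

```python
def build_metric_filter(log_group):
    """put_metric_filter parameters that emit 1 to a custom metric
    every time DECRYP_ERROR appears in the log group."""
    return {
        "logGroupName": log_group,
        "filterName": "decryp-error-filter",
        "filterPattern": '"DECRYP_ERROR"',
        "metricTransformations": [{
            "metricName": "DecrypErrorCount",
            "metricNamespace": "App/Errors",
            "metricValue": "1",
        }],
    }

def build_alarm(topic_arn):
    """put_metric_alarm parameters that notify an SNS topic as soon as
    the metric reaches 1 in a one-minute period."""
    return {
        "AlarmName": "decryp-error-alarm",
        "Namespace": "App/Errors",
        "MetricName": "DecrypErrorCount",
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],
    }
```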
Question #74 Topic 1

A developer created an AWS Lambda function that accesses resources in a VPC. The Lambda function polls an Amazon Simple Queue Service (Amazon SQS) queue for new messages through a VPC endpoint. Then the function calculates a rolling average of the numeric values that are contained in the messages. After initial tests of the Lambda function, the developer found that the value of the rolling average that the function returned was not accurate.

How can the developer ensure that the function calculates an accurate rolling average?

  • A. Set the function's reserved concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
  • B. Modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average.
  • C. Set the function's provisioned concurrency to 1. Calculate the rolling average in the function. Store the calculated rolling average in Amazon ElastiCache.
  • D. Modify the function to store the values in the function's layers. When the function initializes, use the previously stored values to calculate the rolling average.

Correct Answer: C 🗳️

Community vote distribution
B (54%)
A (44%)
3%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
By using ElastiCache, the Lambda function can store the values of the previous messages it has received, which can be used to calculate an accurate rolling average.
upvoted 11 times
...
eboehm
Highly Voted 4 months, 3 weeks ago
Selected Answer: A
You need to set the reserved concurrency to 1 otherwise multiple functions could run at the same time causing the math to be off. Also there was a similar question in another practice exam set that stated the same thing
upvoted 6 times
jipark
3 months, 1 week ago
Reserved concurrency of 1 means the queue is polled in order. This looks like the answer.
upvoted 1 times
...
...
Jonalb
Most Recent 1 week, 4 days ago
Selected Answer: A
By setting the function's reserved concurrency to 1, only one instance of the Lambda function will be invoked at a time. This can help avoid any concurrency issues that could cause inaccuracies in the rolling average. By calculating the rolling average in the function and storing it in Amazon ElastiCache, the function can quickly access and update the average whenever it is invoked.
upvoted 1 times
...
dexdinh91
2 weeks, 6 days ago
Selected Answer: D
D. Modify the function to store the values in the function's layers. When the function initializes, use the previously stored values to calculate the rolling average. This is the best solution because it does not add any overhead to the function, and it does not increase the cost of running the function. Storing the values in the function's layers is a simple and effective way to ensure that the function calculates an accurate rolling average.
upvoted 1 times
...
nnecode
1 month, 1 week ago
Selected Answer: B
The best way for the developer to ensure that the function calculates an accurate rolling average is to modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average. This solution is the best because it ensures that the rolling average is always calculated from the latest values, even if the Lambda function is scaled out to multiple instances.
upvoted 2 times
...
nnecode
1 month, 1 week ago
Selected Answer: B
The correct answer is B. Modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average. This solution will ensure that the Lambda function calculates an accurate rolling average, even if the function is invoked multiple times simultaneously.
upvoted 2 times
...
sofiatian
1 month, 3 weeks ago
Selected Answer: A
Reserved concurrency is the maximum number of concurrent instances you want to allocate to your function. https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html#reserved-and-provisioned
upvoted 1 times
...
love777
2 months, 1 week ago
Selected Answer: B
Explanation: In a Lambda function, maintaining state across invocations can be challenging due to the stateless nature of the serverless architecture. Option B addresses this challenge by using Amazon ElastiCache (a managed in-memory data store) to store the necessary data between invocations. By storing the values in ElastiCache, the Lambda function can retrieve the previous values upon initialization and accurately calculate the rolling average. Options A, C, and D are not the best choices for this scenario: A. Setting the function's reserved concurrency to 1 doesn't inherently solve the accuracy issue. While it might ensure sequential execution, it doesn't address the problem of maintaining state across multiple invocations.
upvoted 3 times
...
redfivedog
3 months, 1 week ago
Selected Answer: A
A is correct. Calculating the rolling average requires the messages in the SQS queue to be processed in order, so concurrent lambda executions won't work.
upvoted 3 times
...
MrPie
4 months ago
Selected Answer: A
Need to set the reserved concurrency to 1.
upvoted 2 times
...
awsazedevsh
4 months ago
Selected Answer: A
It is A. You have to set reserved concurrency to 1 to prevent parallel calculations.
upvoted 2 times
...
qwan
4 months ago
Selected Answer: A
First paragraph from https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
upvoted 2 times
...
sum_la46
4 months, 2 weeks ago
C is the correct answer
upvoted 2 times
...
sum_la46
4 months, 2 weeks ago
Is B Correct answer??
upvoted 1 times
...
Fyssy
6 months, 3 weeks ago
Selected Answer: B
Modify the function to store the values in Amazon ElastiCache. When the function initializes, use the previous values from the cache to calculate the rolling average.
upvoted 3 times
...
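The shared-state approach from option B can be sketched as a handler that keeps recent values in an external cache. This is a minimal illustration, not the exam's reference implementation: the window size, the key name, and the function name are all assumptions, and a plain dict stands in for ElastiCache (in production these would be Redis GET/SET calls).

```python
from collections import deque

WINDOW = 5  # assumed rolling-average window size

def process_message(value, cache, key="recent_values"):
    """Append a new value to the shared history and return the rolling average.

    `cache` is any dict-like store; in the real architecture this would be
    ElastiCache so every Lambda instance sees the same history.
    """
    history = deque(cache.get(key, []), maxlen=WINDOW)
    history.append(value)
    cache[key] = list(history)  # ElastiCache would be a SET call here
    return sum(history) / len(history)
```

Whether reserved concurrency of 1 (option A) is also needed depends on whether concurrent updates to the cache must be serialized; this sketch ignores that race.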
Question #75 Topic 1

A developer is writing unit tests for a new application that will be deployed on AWS. The developer wants to validate all pull requests with unit tests and merge the code with the main branch only when all tests pass.

The developer stores the code in AWS CodeCommit and sets up AWS CodeBuild to run the unit tests. The developer creates an AWS Lambda function to start the CodeBuild task. The developer needs to identify the CodeCommit events in an Amazon EventBridge event that can invoke the Lambda function when a pull request is created or updated.

Which CodeCommit event will meet these requirements?

  • A.
  • B.
  • C.
  • D.

Correct Answer: C 🗳️

Community vote distribution
C (81%)
D (19%)

Dushank
1 month, 4 weeks ago
Answer is C. There's no event called pullRequestUpdated.
upvoted 4 times
...
csG13
5 months ago
Selected Answer: C
It's definitely C. Events in answer D are not real. A & B are clearly wrong since two events are required.
upvoted 4 times
...
Majong
5 months, 2 weeks ago
Selected Answer: C
Two events are needed, so A and B are out. The events mentioned in D do not exist, as zodraz says (just look at the link).
upvoted 3 times
...
Prem28
6 months ago
Selected Answer: C
It's C. The events mentioned in D do not exist.
upvoted 3 times
...
zodraz
6 months ago
Selected Answer: C
It's C. None of the events mentioned in D exist. https://docs.aws.amazon.com/codecommit/latest/userguide/monitoring-events.html#pullRequestSourceBranchUpdated
upvoted 3 times
...
Fyssy
6 months, 3 weeks ago
Selected Answer: D
"detail": { "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"] }
upvoted 3 times
...
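The two events the discussion converges on map to an EventBridge event pattern like the following sketch, which matches CodeCommit pull-request state changes for creation and source-branch updates (the surrounding rule, target Lambda, and any repository filtering are left out):

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"],
  "detail": {
    "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]
  }
}
```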
Question #76 Topic 1

A developer deployed an application to an Amazon EC2 instance. The application needs to know the public IPv4 address of the instance.

How can the application find this information?

  • A. Query the instance metadata from http://169.254.169.254/latest/meta-data/.
  • B. Query the instance user data from http://169.254.169.254/latest/user-data/.
  • C. Query the Amazon Machine Image (AMI) information from http://169.254.169.254/latest/meta-data/ami/.
  • D. Check the hosts file of the operating system.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Naj_64
3 months, 4 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
upvoted 4 times
...
zodraz
6 months ago
Selected Answer: A
You can retrieve the IP address through http://169.254.169.254/latest/meta-data/local-ipv4 or http://169.254.169.254/latest/meta-data/public-ipv4 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
upvoted 2 times
...
zodraz
6 months ago
Selected Answer: A
It's C. None of the events mentioned in D exist. https://docs.aws.amazon.com/codecommit/latest/userguide/monitoring-events.html#pullRequestSourceBranchUpdated
upvoted 2 times
zodraz
5 months, 3 weeks ago
Please remove this comment @admin
upvoted 4 times
...
...
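The metadata query from option A can be sketched in a few lines. This is a hedged illustration of the IMDSv2 flow (fetch a session token, then read `meta-data/public-ipv4`); the fetcher is injectable so the logic can be exercised off an EC2 instance, and the function names are made up for the example.

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

def _http(url, method="GET", headers=None):
    # Default fetcher; on an EC2 instance this reaches the metadata service.
    req = urllib.request.Request(url, method=method, headers=headers or {})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def get_public_ipv4(fetch=_http):
    """Return the instance's public IPv4 via IMDSv2: token first, then metadata."""
    token = fetch(f"{IMDS}/api/token", method="PUT",
                  headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    return fetch(f"{IMDS}/meta-data/public-ipv4",
                 headers={"X-aws-ec2-metadata-token": token})
```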
Question #77 Topic 1

An application under development is required to store hundreds of video files. The data must be encrypted within the application prior to storage, with a unique key for each video file.

How should the developer code the application?

  • A. Use the KMS Encrypt API to encrypt the data. Store the encrypted data key and data.
  • B. Use a cryptography library to generate an encryption key for the application. Use the encryption key to encrypt the data. Store the encrypted data.
  • C. Use the KMS GenerateDataKey API to get a data key. Encrypt the data with the data key. Store the encrypted data key and data.
  • D. Upload the data to an S3 bucket using server side-encryption with an AWS KMS key.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
option C: use the KMS GenerateDataKey API to get a data key. Encrypt the data with the data key. Store the encrypted data key and data.
upvoted 8 times
...
Tinez
Most Recent 1 week, 1 day ago
Option C seems correct
upvoted 1 times
...
hsinchang
1 month, 3 weeks ago
Selected Answer: C
A and B cannot meet the requirement of having a unique key for each file, and D cannot meet the requirement of encrypting the data within the application. C meets all requirements.
upvoted 2 times
...
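The GenerateDataKey envelope pattern from option C can be sketched as: get a fresh data key per file, encrypt locally, and store the ciphertext together with the encrypted key. The sketch below is self-contained and deliberately toy: `os.urandom` stands in for the KMS `GenerateDataKey` call (which would return a plaintext key plus a CiphertextBlob wrapped by the CMK), and the SHA-256 XOR keystream is for illustration only, not production cryptography.

```python
import hashlib
import os

def generate_data_key():
    """Stand-in for kms.generate_data_key(): returns (plaintext, wrapped) key."""
    plaintext_key = os.urandom(32)
    encrypted_key = plaintext_key  # KMS would return this wrapped by the CMK
    return plaintext_key, encrypted_key

def _keystream(key, length):
    # Derive a deterministic keystream from the data key (toy construction).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data, key):
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

decrypt = encrypt  # XOR keystream cipher is symmetric
```

Per video file the application would call `generate_data_key()` once, encrypt with the plaintext key, discard it, and persist the ciphertext plus the encrypted key — decryption later needs only KMS `Decrypt` on the stored key.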
Question #78 Topic 1

A company is planning to deploy an application on AWS behind an Elastic Load Balancer. The application uses an HTTP/HTTPS listener and must access the client IP addresses.

Which load-balancing solution meets these requirements?

  • A. Use an Application Load Balancer and the X-Forwarded-For headers.
  • B. Use a Network Load Balancer (NLB). Enable proxy protocol support on the NLB and the target application.
  • C. Use an Application Load Balancer. Register the targets by the instance ID.
  • D. Use a Network Load Balancer and the X-Forwarded-For headers.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
Use an Application Load Balancer (ALB) and the X-Forwarded-For headers. When an ALB is used, the X-Forwarded-For header can be used to pass the client IP address to the backend servers.
upvoted 7 times
...
HuiHsin
Most Recent 4 months, 4 weeks ago
It's A. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html https://aws.amazon.com/elasticloadbalancing/features/?nc=sn&loc=2
upvoted 3 times
...
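On the backend, reading the client IP from the header the ALB adds is a one-liner. A minimal sketch (function name is made up; it assumes no spoofed header injected upstream, which real deployments should guard against):

```python
def client_ip(headers):
    """Return the original client IP from X-Forwarded-For.

    The ALB appends each hop, so the left-most entry is the original client.
    """
    xff = headers.get("X-Forwarded-For", "")
    return xff.split(",")[0].strip() or None
```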
Question #79 Topic 1

A developer wants to debug an application by searching and filtering log data. The application logs are stored in Amazon CloudWatch Logs. The developer creates a new metric filter to count exceptions in the application logs. However, no results are returned from the logs.

What is the reason that no filtered results are being returned?

  • A. A setup of the Amazon CloudWatch interface VPC endpoint is required for filtering the CloudWatch Logs in the VPC.
  • B. CloudWatch Logs only publishes metric data for events that happen after the filter is created.
  • C. The log group for CloudWatch Logs should be first streamed to Amazon OpenSearch Service before metric filtering returns the results.
  • D. Metric data points for logs groups can be filtered only after they are exported to an Amazon S3 bucket.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

zodraz
Highly Voted 6 months ago
Selected Answer: B
Filters do not retroactively filter data. Filters only publish the metric data points for events that happen after the filter was created. https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
upvoted 9 times
...
Dushank
Most Recent 1 month, 4 weeks ago
Selected Answer: B
Metric filters in Amazon CloudWatch Logs are applied only to new log events. If you create a metric filter and are looking to count exceptions, the filter will only apply to log events generated after the metric filter was created. Existing logs are not scanned.
upvoted 3 times
...
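For reference, a metric filter like the one in the question could be created as below (log group, filter, and namespace names are hypothetical). Note the point answer B makes: the filter only counts log events ingested after it is created, so existing exceptions never show up.

```shell
aws logs put-metric-filter \
  --log-group-name /my-app/production \
  --filter-name ExceptionCount \
  --filter-pattern '"Exception"' \
  --metric-transformations \
      metricName=ExceptionCount,metricNamespace=MyApp,metricValue=1
```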
Question #80 Topic 1

A company is planning to use AWS CodeDeploy to deploy an application to Amazon Elastic Container Service (Amazon ECS). During the deployment of a new version of the application, the company initially must expose only 10% of live traffic to the new version of the deployed application. Then, after 15 minutes elapse, the company must route all the remaining live traffic to the new version of the deployed application.

Which CodeDeploy predefined configuration will meet these requirements?

  • A. CodeDeployDefault.ECSCanary10Percent15Minutes
  • B. CodeDeployDefault.LambdaCanary10Percent5Minutes
  • C. CodeDeployDefault.LambdaCanary10Percent15Minutes
  • D. CodeDeployDefault.ECSLinear10PercentEvery1Minutes

Correct Answer: A 🗳️

Community vote distribution
A (100%)

zodraz
Highly Voted 6 months ago
Selected Answer: A
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
upvoted 6 times
...
Dushank
Most Recent 1 month, 4 weeks ago
Selected Answer: A
This predefined deployment configuration for AWS CodeDeploy with Amazon ECS will initially shift 10% of the traffic to the new version and wait for 15 minutes before shifting the remaining 90% of the traffic to the new version.
upvoted 3 times
...
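The predefined configuration from answer A is simply referenced by name when starting the deployment; a hedged sketch (application, deployment-group, and revision file names are hypothetical):

```shell
aws deploy create-deployment \
  --application-name my-ecs-app \
  --deployment-group-name my-ecs-dg \
  --deployment-config-name CodeDeployDefault.ECSCanary10Percent15Minutes \
  --revision file://revision.json
```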
Question #81 Topic 1

A company hosts a batch processing application on AWS Elastic Beanstalk with instances that run the most recent version of Amazon Linux. The application sorts and processes large datasets.

In recent weeks, the application's performance has decreased significantly during a peak period for traffic. A developer suspects that the application issues are related to the memory usage. The developer checks the Elastic Beanstalk console and notices that memory usage is not being tracked.

How should the developer gather more information about the application performance issues?

  • A. Configure the Amazon CloudWatch agent to push logs to Amazon CloudWatch Logs by using port 443.
  • B. Configure the Elastic Beanstalk .ebextensions directory to track the memory usage of the instances.
  • C. Configure the Amazon CloudWatch agent to track the memory usage of the instances.
  • D. Configure an Amazon CloudWatch dashboard to track the memory usage of the instances.

Correct Answer: B 🗳️

Community vote distribution
C (59%)
B (41%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
Configure the Amazon CloudWatch agent to track the memory usage of the instances.
upvoted 14 times
...
eboehm
Highly Voted 4 months, 3 weeks ago
Selected Answer: B
For Elastic Beanstalk you make this configuration in the .ebextensions folder https://repost.aws/knowledge-center/elastic-beanstalk-memory-metrics-windows
upvoted 8 times
DumPisach
4 months, 2 weeks ago
But the question says Linux
upvoted 2 times
Naj_64
3 months, 1 week ago
Applies to Linux as well: https://medium.com/tomincode/cloudwatch-memory-monitoring-for-elastic-beanstalk-1caa98d57d5c
upvoted 1 times
...
...
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: B
I vote for B, since it is already available with .ebextensions and does not require installing the agent separately.
upvoted 1 times
...
Dushank
1 month, 4 weeks ago
Selected Answer: C
Amazon CloudWatch does not collect memory metrics by default. You need to install the CloudWatch agent on your instances to collect this additional system-level metric like memory utilization.
upvoted 3 times
...
love777
2 months, 1 week ago
Selected Answer: C
The .ebextensions directory is used for configuration and customization settings, but it doesn't directly enable tracking memory usage metrics.
upvoted 2 times
...
fossil123
2 months, 1 week ago
Selected Answer: B
You can provision Elastic Beanstalk configuration files (.ebextensions) to monitor memory utilization with CloudWatch.
upvoted 1 times
...
Naj_64
3 months, 1 week ago
Selected Answer: B
https://medium.com/tomincode/cloudwatch-memory-monitoring-for-elastic-beanstalk-1caa98d57d5c
upvoted 2 times
...
jasper_pigeon
3 months, 2 weeks ago
You can use the .ebextensions directory, which can configure the CloudWatch agent. But the option didn't mention the CloudWatch agent.
upvoted 2 times
...
qwan
4 months ago
Selected Answer: B
This is for linux https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-cw.html
upvoted 3 times
...
rlnd2000
4 months, 3 weeks ago
Selected Answer: C
I will go with C. The question said the issues are related to memory usage, i.e., the memory on the instances the app is running on; the CloudWatch agent is needed on each instance.
upvoted 3 times
...
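The B-vs-C debate above is partly a false dichotomy: on Elastic Beanstalk the CloudWatch agent is typically configured *through* an `.ebextensions` config file. A hedged sketch (file paths follow the CloudWatch agent's standard layout; the exact mechanism varies by platform branch):

```yaml
# .ebextensions/cloudwatch-memory.config -- illustrative only
files:
  "/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json":
    mode: "000644"
    owner: root
    group: root
    content: |
      {
        "metrics": {
          "metrics_collected": {
            "mem": { "measurement": ["mem_used_percent"] }
          }
        }
      }
container_commands:
  01_start_cwagent:
    command: >-
      /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
      -a fetch-config -m ec2 -c
      file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
```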
Question #82 Topic 1

A developer is building a highly secure healthcare application using serverless components. This application requires writing temporary data to /tmp storage on an AWS Lambda function.

How should the developer encrypt this data?

  • A. Enable Amazon EBS volume encryption with an AWS KMS key in the Lambda function configuration so that all storage attached to the Lambda function is encrypted.
  • B. Set up the Lambda function with a role and key policy to access an AWS KMS key. Use the key to generate a data key used to encrypt all data prior to writing to /tmp storage.
  • C. Use OpenSSL to generate a symmetric encryption key on Lambda startup. Use this key to encrypt the data prior to writing to /tmp.
  • D. Use an on-premises hardware security module (HSM) to generate keys, where the Lambda function requests a data key from the HSM and uses that to encrypt data on all requests to the function.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Milan61
1 month ago
B is the solution
upvoted 1 times
...
Yuxing_Li
2 months, 1 week ago
Selected Answer: B
Go with B
upvoted 2 times
...
abdelbz16
6 months, 1 week ago
Selected Answer: B
B is the best solution
upvoted 4 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
is the best solution for encrypting temporary data written to /tmp storage on an AWS Lambda function
upvoted 4 times
...
Question #83 Topic 1

A developer has created an AWS Lambda function to provide notification through Amazon Simple Notification Service (Amazon SNS) whenever a file is uploaded to Amazon S3 that is larger than 50 MB. The developer has deployed and tested the Lambda function by using the CLI. However, when the event notification is added to the S3 bucket and a 3,000 MB file is uploaded, the Lambda function does not launch.

Which of the following is a possible reason for the Lambda function's inability to launch?

  • A. The S3 event notification does not activate for files that are larger than 1,000 MB.
  • B. The resource-based policy for the Lambda function does not have the required permissions to be invoked by Amazon S3.
  • C. Lambda functions cannot be invoked directly from an S3 event.
  • D. The S3 bucket needs to be made public.

Correct Answer: B 🗳️

Community vote distribution
B (88%)
13%

Jamshif01
Highly Voted 5 months, 3 weeks ago
Selected Answer: B
B is the right answer. A is incorrect because the size of the file should not affect whether the event notification is triggered. C is incorrect because Lambda functions can indeed be invoked directly from an S3 event. D is incorrect because the S3 bucket does not need to be made public for the Lambda function to be invoked. (c) ChatGPT
upvoted 5 times
...
Prem28
Most Recent 5 months, 1 week ago
B. Option A is wrong: S3 event notifications can activate for files that are larger than 1,000 MB. Option C is wrong: Lambda functions can be invoked directly from an S3 event. Option D is wrong: the S3 bucket does not need to be made public in order for the Lambda function to be invoked.
upvoted 2 times
...
chumji
5 months, 4 weeks ago
Selected Answer: B
Answer is B.
upvoted 2 times
...
junrun3
5 months, 4 weeks ago
Selected Answer: A
Answer A.
upvoted 1 times
junrun3
5 months, 4 weeks ago
not A, answer is B
upvoted 3 times
...
...
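The missing permission from answer B is a resource-based policy statement on the Lambda function allowing S3 to invoke it (normally added with `aws lambda add-permission`). A sketch of the resulting policy — region, account ID, function name, and bucket name are all hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3Invoke",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:notify-large-upload",
      "Condition": {
        "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::my-upload-bucket" }
      }
    }
  ]
}
```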
Question #84 Topic 1

A developer is creating a Ruby application and needs to automate the deployment, scaling, and management of an environment without requiring knowledge of the underlying infrastructure.

Which service would best accomplish this task?

  • A. AWS CodeDeploy
  • B. AWS CloudFormation
  • C. AWS OpsWorks
  • D. AWS Elastic Beanstalk

Correct Answer: D 🗳️

Community vote distribution
D (100%)

Prem28
Highly Voted 5 months, 1 week ago
Answer: D. AWS CodeDeploy can automate the deployment of code to any instance, including Amazon EC2 instances and on-premises servers. However, it does not provide the same level of automation as Elastic Beanstalk, and it requires more manual intervention from developers. AWS CloudFormation can help you model and set up your AWS resources. However, it does not provide any automation for deploying or managing applications. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. However, it is not as easy to use as Elastic Beanstalk, and it does not provide the same level of automation for deploying or managing applications.
upvoted 8 times
...
zodraz
Highly Voted 6 months ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/88659-exam-aws-certified-developer-associate-topic-1-question-197/
upvoted 5 times
...
Dushank
Most Recent 1 month, 4 weeks ago
Selected Answer: D
AWS Elastic Beanstalk is designed for developers like the one in your scenario who want to deploy and manage applications without worrying about the underlying infrastructure. It automates the deployment process and automatically handles capacity provisioning, load balancing, auto-scaling, and application health monitoring. You can use it with various platforms including Ruby.
upvoted 2 times
...
Question #85 Topic 1

A company has a web application that is deployed on AWS. The application uses an Amazon API Gateway API and an AWS Lambda function as its backend.

The application recently demonstrated unexpected behavior. A developer examines the Lambda function code, finds an error, and modifies the code to resolve the problem. Before deploying the change to production, the developer needs to run tests to validate that the application operates properly.

The application has only a production environment available. The developer must create a new development environment to test the code changes. The developer must also prevent other developers from overwriting these changes during the test cycle.

Which combination of steps will meet these requirements with the LEAST development effort? (Choose two.)

  • A. Create a new resource in the current stage. Create a new method with Lambda proxy integration. Select the Lambda function. Add the hotfix alias. Redeploy the current stage. Test the backend.
  • B. Update the Lambda function in the API Gateway API integration request to use the hotfix alias. Deploy the API Gateway API to a new stage named hotfix. Test the backend.
  • C. Modify the Lambda function by fixing the code. Test the Lambda function. Create the alias hotfix. Point the alias to the $LATEST version.
  • D. Modify the Lambda function by fixing the code. Test the Lambda function. When the Lambda function is working as expected, publish the Lambda function as a new version. Create the alias hotfix. Point the alias to the new version.
  • E. Create a new API Gateway API for the development environment. Add a resource and method with Lambda integration. Choose the Lambda function and the hotfix alias. Deploy to a new stage. Test the backend.

Correct Answer: BD 🗳️

Community vote distribution
BD (100%)

Ponyi
14 hours, 18 minutes ago
Selected Answer: BD
Why D over C? Versions are immutable. $LATEST is mutable, which means anyone with access to Lambda can edit and deploy new code. The question simply doesn't want that. Why B over E? You don't need to create a whole new API to test a new feature. You can simply achieve this by deploying it to a different stage. Afterwards, you can redirect the users to the new stage or do A/B testing.
upvoted 1 times
...
r3mo
3 months, 1 week ago
C - D. C vs B : option C is preferred over option B because it provides a more isolated and controlled environment for testing the hotfix without directly affecting the production environment. It gives you the flexibility to iterate on the hotfix if needed and promotes a safer development and testing process. D vs E : Option E is preferred over option D because it provides a more isolated and controlled environment for testing the hotfix. It avoids version management complexities and promotes a safer development and testing process by creating a dedicated development environment.
upvoted 3 times
...
tttamtttam
3 months, 3 weeks ago
Selected Answer: BD
D ==> change the lambda function. B ==> update the API gateway to use the updated lambda function & deploy it into another(new) stage. so that developers can use the newly deployed API endpoint.
upvoted 3 times
...
csG13
5 months ago
Selected Answer: BD
It is B & D. Clearly E isn't operationally efficient. So we got to choose from A & B one, and C & D the second. Between A & B, we gotta pick B since in the question it clearly states that we don't want to touch the existing solution. Regarding C & D, seems like D is more thorough and also pointing to $LATEST is not sufficiently explicit when you troubleshoot.
upvoted 3 times
...
zodraz
6 months ago
Selected Answer: BD
https://www.examtopics.com/discussions/amazon/view/89549-exam-aws-certified-developer-associate-topic-1-question-334/
upvoted 2 times
...
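The B + D combination boils down to three CLI calls; a hedged sketch where the function name, API ID, and version number are hypothetical:

```shell
aws lambda publish-version --function-name transactions-fn    # freeze the fixed code
aws lambda create-alias --function-name transactions-fn \
  --name hotfix --function-version 7                          # point the alias at it
aws apigateway create-deployment --rest-api-id a1b2c3 \
  --stage-name hotfix                                         # new test stage
```

The API's integration request is then updated to invoke the function through the `hotfix` alias, leaving the production stage untouched during the test cycle.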
Question #86 Topic 1

A developer is implementing an AWS Cloud Development Kit (AWS CDK) serverless application. The developer will provision several AWS Lambda functions and Amazon API Gateway APIs during AWS CloudFormation stack creation. The developer's workstation has the AWS Serverless Application Model (AWS SAM) and the AWS CDK installed locally.

How can the developer test a specific Lambda function locally?

  • A. Run the sam package and sam deploy commands. Create a Lambda test event from the AWS Management Console. Test the Lambda function.
  • B. Run the cdk synth and cdk deploy commands. Create a Lambda test event from the AWS Management Console. Test the Lambda function.
  • C. Run the cdk synth and sam local invoke commands with the function construct identifier and the path to the synthesized CloudFormation template.
  • D. Run the cdk synth and sam local start-lambda commands with the function construct identifier and the path to the synthesized CloudFormation template.

Correct Answer: D 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
The developer can test a specific Lambda function locally by running the cdk synth command to synthesize the AWS CDK application into an AWS CloudFormation template. Then, the developer can use the sam local invoke command with the function construct identifier and the path to the synthesized CloudFormation template to test the Lambda function locally (option C).
upvoted 7 times
...
Dushank
Most Recent 1 month, 4 weeks ago
Selected Answer: C
To test a specific Lambda function locally when using the AWS Cloud Development Kit (AWS CDK), the developer can use the AWS Serverless Application Model (AWS SAM) CLI's local testing capabilities in conjunction with the CDK. The typical process would be: Run cdk synth to synthesize the AWS CDK app into a CloudFormation template. Use sam local invoke to run the specific Lambda function locally, providing the function's logical identifier and the path to the synthesized CloudFormation template as arguments.
upvoted 4 times
...
fossil123
2 months, 1 week ago
Selected Answer: C
Use the AWS SAM CLI sam local invoke subcommand to initiate a one-time invocation of an AWS Lambda function locally. https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/using-sam-cli-local-invoke.html
upvoted 2 times
...
JamalDaBoss
3 months ago
Selected Answer: C
Answer is clearly C. If you say it's not C, you are wrong.
upvoted 2 times
...
zodraz
6 months ago
Selected Answer: C
sam local invoke StackLogicalId/FunctionLogicalId https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/using-sam-cli-local-invoke.html
upvoted 4 times
...
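The two commands from answer C look roughly like this (stack name, function construct identifier, and event file are hypothetical):

```shell
cdk synth                         # writes cdk.out/MyStack.template.json
sam local invoke MyFunction \
  -t cdk.out/MyStack.template.json \
  -e events/test-event.json       # one-shot local invocation of that function
```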
Question #87 Topic 1

A company's new mobile app uses Amazon API Gateway. As the development team completes a new release of its APIs, a developer must safely and transparently roll out the API change.

What is the SIMPLEST solution for the developer to use for rolling out the new API version to a limited number of users through API Gateway?

  • A. Create a new API in API Gateway. Direct a portion of the traffic to the new API using an Amazon Route 53 weighted routing policy.
  • B. Validate the new API version and promote it to production during the window of lowest expected utilization.
  • C. Implement an Amazon CloudWatch alarm to trigger a rollback if the observed HTTP 500 status code rate exceeds a predetermined threshold.
  • D. Use the canary release deployment option in API Gateway. Direct a percentage of the API traffic using the canarySettings setting.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: D
Canary deployments allow you to divert a percentage of your API traffic to a new API version, enabling you to test how the new version will perform under real-world conditions without fully replacing the previous version. This is especially useful for reducing the risk associated with deploying new versions.
upvoted 3 times
...
zodraz
6 months ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/51596-exam-aws-certified-developer-associate-topic-1-question-355/
upvoted 4 times
...
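Option D can be made concrete. Below is a minimal sketch (names and IDs are hypothetical) of the canarySettings payload that boto3's apigateway.create_deployment accepts, plus a helper showing how the traffic split is expressed:

```python
# Hedged sketch of option D: an API Gateway canary deployment payload.
# The REST API ID, stage name, and stage variable override are hypothetical.
canary_deployment = {
    "restApiId": "a1b2c3d4e5",           # hypothetical API ID
    "stageName": "prod",
    "canarySettings": {
        "percentTraffic": 10.0,          # route 10% of requests to the canary
        "useStageCache": False,
        "stageVariableOverrides": {"lambdaAlias": "new-version"},
    },
}

def canary_fraction(deployment: dict) -> float:
    """Fraction of traffic the canary deployment receives."""
    return deployment["canarySettings"]["percentTraffic"] / 100.0
```

Once the canary looks healthy, the deployment can be promoted so that all traffic uses the new version; if it misbehaves, deleting the canary settings rolls everyone back to the original deployment.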
Question #88 Topic 1

A company caches session information for a web application in an Amazon DynamoDB table. The company wants an automated way to delete old items from the table.

What is the simplest way to do this?

  • A. Write a script that deletes old records; schedule the script as a cron job on an Amazon EC2 instance.
  • B. Add an attribute with the expiration time; enable the Time To Live feature based on that attribute.
  • C. Each day, create a new table to hold session data; delete the previous day's table.
  • D. Add an attribute with the expiration time; name the attribute ItemExpiration.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: B
The simplest way to automatically delete old items from an Amazon DynamoDB table is to use DynamoDB's Time to Live (TTL) feature. This feature allows you to define an attribute that stores the expiration time for each item. Once the specified time has passed, DynamoDB automatically deletes the expired items, freeing up storage and reducing costs without the need for custom scripts or manual intervention.
upvoted 3 times
...
catcatpunch
5 months, 1 week ago
https://docs.aws.amazon.com/ko_kr/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times
...
zodraz
6 months ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/7225-exam-aws-certified-developer-associate-topic-1-question-107/
upvoted 4 times
...
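For option B, the only application-side work is writing an epoch-seconds expiration attribute; DynamoDB's TTL feature then deletes expired items in the background. A minimal sketch (attribute and key names are hypothetical):

```python
import time

# Hedged sketch of option B: each session item carries a Number attribute
# with an epoch-seconds expiry; TTL is then enabled on that attribute.
SESSION_LIFETIME_SECONDS = 24 * 60 * 60  # expire sessions after one day

def session_item(session_id, now=None):
    """Build a DynamoDB item whose 'expires_at' attribute drives TTL."""
    now = int(time.time()) if now is None else now
    return {
        "session_id": {"S": session_id},
        "expires_at": {"N": str(now + SESSION_LIFETIME_SECONDS)},  # TTL attribute
    }
```

TTL must be enabled on the table with exactly this attribute name. Note that DynamoDB deletes expired items in the background (possibly with some delay after expiry), so reads should still filter out already-expired items if precision matters.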
Question #89 Topic 1

A company is using an Amazon API Gateway REST API endpoint as a webhook to publish events from an on-premises source control management (SCM) system to Amazon EventBridge. The company has configured an EventBridge rule to listen for the events and to control application deployment in a central AWS account. The company needs to receive the same events across multiple receiver AWS accounts.

How can a developer meet these requirements without changing the configuration of the SCM system?

  • A. Deploy the API Gateway REST API to all the required AWS accounts. Use the same custom domain name for all the gateway endpoints so that a single SCM webhook can be used for all events from all accounts.
  • B. Deploy the API Gateway REST API to all the receiver AWS accounts. Create as many SCM webhooks as the number of AWS accounts.
  • C. Grant permission to the central AWS account for EventBridge to access the receiver AWS accounts. Add an EventBridge event bus on the receiver AWS accounts as the targets to the existing EventBridge rule.
  • D. Convert the API Gateway type from REST API to HTTP API.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

csG13
Highly Voted 5 months ago
Selected Answer: C
It's C - eventbridge event buses in one (target) account can be a target of another event rule in a source account. For reference, watch the video in the following link: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cross-account.html
upvoted 6 times
...
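Option C's cross-account fan-out can be sketched as the put_targets payload the central account would send (account IDs, region, and rule name are hypothetical). Each receiver account's event bus also needs a resource policy allowing the central account to PutEvents:

```python
# Hedged sketch of option C: add the receiver accounts' default event buses
# as targets of the existing EventBridge rule in the central account.
RECEIVER_ACCOUNTS = ["111111111111", "222222222222"]  # hypothetical accounts

def cross_account_targets(rule_name, region, accounts):
    """Build the request payload that boto3's events.put_targets accepts."""
    return {
        "Rule": rule_name,
        "Targets": [
            {
                "Id": f"receiver-{acct}",
                # The target is the *event bus* in the receiver account.
                "Arn": f"arn:aws:events:{region}:{acct}:event-bus/default",
            }
            for acct in accounts
        ],
    }
```

The SCM webhook keeps pointing at the single API Gateway endpoint in the central account; only the rule's target list changes.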
Question #90 Topic 1

A company moved some of its secure files to a private Amazon S3 bucket that has no public access. The company wants to develop a serverless application that gives its employees the ability to log in and securely share the files with other users.

Which AWS feature should the company use to share and access the files securely?

  • A. Amazon Cognito user pool
  • B. S3 presigned URLs
  • C. S3 bucket policy
  • D. Amazon Cognito identity pool

Correct Answer: A 🗳️

Community vote distribution
B (45%)
D (34%)
A (21%)

loctong
Highly Voted 5 months, 3 weeks ago
Selected Answer: A
The key words are "ability to log in and securely share the files". It is A.
upvoted 9 times
jipark
3 months ago
I agree, 'log in' would point to a user pool.
upvoted 2 times
...
...
Dushank
Highly Voted 1 month, 4 weeks ago
Selected Answer: B
Employees log into the serverless application using an Amazon Cognito User Pool. Once logged in, the application's back-end logic (possibly a Lambda function) generates an S3 pre-signed URL for the requested file. The pre-signed URL is then given to the authenticated user, allowing them secure, time-limited access to that specific S3 object. So, while both Amazon Cognito User Pool and S3 Pre-signed URLs would be used in the solution, S3 Pre-signed URLs (Option B) are the specific feature that allows for the secure, temporary sharing of S3 files. Therefore, Option B would be the best answer to the question of how to "share and access the files securely."
upvoted 8 times
...
didorins
Most Recent 1 week, 6 days ago
For login of users external to AWS, we can use Cognito. Identity Pool is specifically for DynamoDB and S3. Use an identity pool when you need to: Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon DynamoDB table. https://repost.aws/knowledge-center/cognito-user-pools-identity-pools
upvoted 1 times
...
Rameez1
2 weeks, 6 days ago
Selected Answer: B
Actual ask is in the final line "Which AWS feature should the company use to share and access the files securely?" -> S3 Pre-signed URL provides the most secure feature.
upvoted 1 times
...
EMPERBACH
1 month, 2 weeks ago
Selected Answer: B
Secure solution for sharing private s3 resource
upvoted 1 times
...
Iamtany
1 month, 3 weeks ago
Selected Answer: B
I say 'B' because: The question is "Which AWS feature should the company use to share and access the files securely?" if you look at this part there is no mention about login part. Though there is requirement for the application as a whole, the question targets only about sharing and accessing files securely.
upvoted 4 times
...
fossil123
2 months, 1 week ago
Selected Answer: A
'Login' points to A
upvoted 2 times
...
Yuxing_Li
2 months, 1 week ago
Selected Answer: D
You need access to S3
upvoted 2 times
...
breadops
2 months, 1 week ago
Selected Answer: B
Which AWS feature should the company use to share and access the files securely? It's B - S3 Presigned URLs.
upvoted 2 times
...
hmdev
2 months, 2 weeks ago
I think it's A because we need to log in. The context doesn't say anything indicating federated users, so it doesn't look like D. Also, a user needs to log in to create a pre-signed URL.
upvoted 1 times
...
Naj_64
2 months, 2 weeks ago
Selected Answer: D
https://www.techtarget.com/searchcloudcomputing/feature/Cognito-user-pools-vs-identity-pools-what-AWS-users-should-know "On the other hand, identity pools are primarily used for authorization. This second Cognito feature, also known as federated identities, has two common use cases -- to provide access to different AWS resources and to create temporary credentials for unauthenticated users" "User pools alone do not deal with any IAM-level permissions but provide critical information so the enterprise can authorize the users"
upvoted 3 times
...
andrevus
2 months, 3 weeks ago
Selected Answer: B
The question is key: "Which AWS feature should the company use to share and access the files securely?" You cannot share files with Cognito!
upvoted 2 times
...
Sbon24
3 months ago
Selected Answer: D
Option D is correct. https://repost.aws/knowledge-center/cognito-user-pools-identity-pools
upvoted 3 times
...
bindu545
3 months ago
B. S3 presigned URLs
Explanation: Using S3 presigned URLs is the most secure way to give employees the ability to access and share files securely from a private S3 bucket.
Using Amazon Cognito user pool (Option A) and Amazon Cognito identity pool (Option D) can help with user authentication and identity management, but they don't directly handle secure sharing and access to files from a private S3 bucket.
Option C (S3 bucket policy) is used to control access to the S3 bucket and its objects, but it's not recommended to make the bucket public or grant access to unauthorized users.
Using S3 presigned URLs is a more secure approach to control access to specific objects for a limited time.
upvoted 1 times
...
r3mo
3 months, 1 week ago
Option D (Amazon Cognito identity pool) is the correct choice to securely share and access the files in the private S3 bucket, providing a secure and managed way for employees to log in and access the files while controlling access to other users.
upvoted 1 times
...
bobo777
3 months, 2 weeks ago
Selected Answer: D
Only Cognito Identity pool (combined with User pool) allows users from social networks to log in and get access to AWS resources.
upvoted 3 times
...
jasper_pigeon
3 months, 2 weeks ago
For both the ability to log in and securely share the files, Cognito identity pool is the only answer. Users can log in via public social, OIDC, SAML and Cognito User Pools. S3 presigned URLs are for temporary usage.
upvoted 3 times
...
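For context on what an S3 presigned URL (option B) actually is: it is just a normal object URL carrying a SigV4 signature derived from the caller's credentials. In practice you would call boto3's generate_presigned_url; the stdlib-only sketch below (dummy credentials, simplified escaping of the object key) shows the mechanics:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(bucket, key, region, access_key, secret_key, expires=300, now=None):
    """Stdlib sketch of a SigV4 query-string presigned GET URL for an S3
    object. Simplified: assumes the key needs no per-segment URI escaping."""
    now = now or datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{k}={urllib.parse.quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join(
        ["GET", f"/{key}", query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"]
    )
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    signing_key = sign(
        sign(sign(sign(("AWS4" + secret_key).encode(), datestamp), region), "s3"),
        "aws4_request",
    )
    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

Anyone holding the URL can GET the object until X-Amz-Expires elapses, which is why the application can hand these URLs to employees (who first authenticated, e.g. via a Cognito user pool) for time-limited sharing.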
Question #91 Topic 1

A company needs to develop a proof of concept for a web service application. The application will show the weather forecast for one of the company's office locations. The application will provide a REST endpoint that clients can call. Where possible, the application should use caching features provided by AWS to limit the number of requests to the backend service. The application backend will receive a small amount of traffic only during testing.

Which approach should the developer take to provide the REST endpoint MOST cost-effectively?

  • A. Create a container image. Deploy the container image by using Amazon Elastic Kubernetes Service (Amazon EKS). Expose the functionality by using Amazon API Gateway.
  • B. Create an AWS Lambda function by using the AWS Serverless Application Model (AWS SAM). Expose the Lambda functionality by using Amazon API Gateway.
  • C. Create a container image. Deploy the container image by using Amazon Elastic Container Service (Amazon ECS). Expose the functionality by using Amazon API Gateway.
  • D. Create a microservices application. Deploy the application to AWS Elastic Beanstalk. Expose the AWS Lambda functionality by using an Application Load Balancer.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

loctong
Highly Voted 5 months, 3 weeks ago
Selected Answer: B
An AWS Lambda function absolutely has the ability to meet the requirements.
upvoted 5 times
JamalDaBoss
3 months ago
Yes, Lambda is certainly great here.
upvoted 2 times
...
...
hmdev
Most Recent 2 months, 2 weeks ago
Selected Answer: B
B is the cost-effective one.
upvoted 2 times
...
Question #92 Topic 1

An e-commerce web application that shares session state on-premises is being migrated to AWS. The application must be fault tolerant, natively highly scalable, and any service interruption should not affect the user experience.

What is the best option to store the session state?

  • A. Store the session state in Amazon ElastiCache.
  • B. Store the session state in Amazon CloudFront.
  • C. Store the session state in Amazon S3.
  • D. Enable session stickiness using elastic load balancers.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Phongsanth
4 months, 2 weeks ago
Selected Answer: A
I vote A https://aws.amazon.com/blogs/developer/elasticache-as-an-asp-net-session-store/
upvoted 2 times
...
loctong
5 months, 3 weeks ago
Selected Answer: A
the answer came from the discussion at https://www.examtopics.com/discussions/amazon/view/8789-exam-aws-certified-developer-associate-topic-1-question-176/
upvoted 3 times
...
zodraz
6 months ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/8789-exam-aws-certified-developer-associate-topic-1-question-176/
upvoted 4 times
...
Question #93 Topic 1

A developer is building an application that uses Amazon DynamoDB. The developer wants to retrieve multiple specific items from the database with a single API call.

Which DynamoDB API call will meet these requirements with the MINIMUM impact on the database?

  • A. BatchGetItem
  • B. GetItem
  • C. Scan
  • D. Query

Correct Answer: D 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
A Is the correct answer with the minimum impact on the database.
upvoted 8 times
...
dan80
Highly Voted 6 months, 1 week ago
Selected Answer: A
https://beabetterdev.com/2022/10/12/dynamodb-getitem-vs-query-when-to-use-what/#:~:text=If%20you'd%20like%20to%20retrieve%20multiple%20items%20at%20once,retrieve%20multiple%20items%20at%20once.
upvoted 6 times
jipark
3 months ago
Tons of thanks.
Looking for just a single item on the main table index? Use GetItem.
Looking for just a single item on a GSI? Use Query.
Looking for multiple items with different partition key and sort key combinations at once? Use BatchGetItem.
Looking for multiple items that share the same partition key? Use Query.
upvoted 3 times
...
...
marolisa
Most Recent 1 month, 2 weeks ago
D. "Query" allows you to use a filter for multiple specific items and is less expensive than the Scan operation.
upvoted 1 times
...
Baba_Eni
4 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
upvoted 1 times
...
imvb88
5 months, 2 weeks ago
Selected Answer: A
Need specific items -> cannot be Scan or Query, since they are for retrieving items that match conditions. We need multiple items, so A is the option left.
upvoted 1 times
...
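A minimal sketch of option A (table, key, and attribute names are hypothetical): the request shape that boto3's dynamodb.batch_get_item accepts, fetching several specific items in one call (up to 100 keys or 16 MB per call):

```python
# Hedged sketch of option A: one BatchGetItem request for several specific
# items, instead of several GetItem calls or a full Scan.
def batch_get_request(table, ids):
    """Build the RequestItems payload for a single batch_get_item call."""
    return {
        "RequestItems": {
            table: {
                "Keys": [{"order_id": {"S": i}} for i in ids],
                # Fetch only the attributes we need, further reducing load.
                "ProjectionExpression": "order_id, #s",
                "ExpressionAttributeNames": {"#s": "status"},  # reserved word
            }
        }
    }
```

Callers should also handle the UnprocessedKeys field in the response and retry those keys, since a batch can be partially throttled.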
Question #94 Topic 1

A developer has written an application that runs on Amazon EC2 instances. The developer is adding functionality for the application to write objects to an Amazon S3 bucket.

Which policy must the developer modify to allow the instances to write these objects?

  • A. The IAM policy that is attached to the EC2 instance profile role
  • B. The session policy that is applied to the EC2 instance role session
  • C. The AWS Key Management Service (AWS KMS) key policy that is attached to the EC2 instance profile role
  • D. The Amazon VPC endpoint policy

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Ja13
5 months ago
Selected Answer: A
A: https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket
upvoted 3 times
...
mgonblan
5 months, 1 week ago
B: I think B is better, because we need to use it in the instance session.
upvoted 1 times
...
Prem28
5 months, 3 weeks ago
Selected Answer: A
a is correct
upvoted 4 times
...
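Option A in concrete terms: the identity policy attached to the instance profile role just needs s3:PutObject on the target bucket's objects. A sketch (the bucket name is hypothetical):

```python
import json

# Hedged sketch of option A: the IAM identity policy attached to the EC2
# instance profile role, allowing writes to one bucket's objects.
write_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            # Object-level action, so the resource is the objects (/*),
            # not the bucket ARN itself.
            "Resource": "arn:aws:s3:::my-app-bucket/*",
        }
    ],
}
policy_json = json.dumps(write_policy)
```

The instances then pick up temporary credentials for this role automatically through the instance profile; no access keys need to be stored on the instances.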
Question #95 Topic 1

A developer is leveraging a Border Gateway Protocol (BGP)-based AWS VPN connection to connect from on-premises to Amazon EC2 instances in the developer's account. The developer is able to access an EC2 instance in subnet A, but is unable to access an EC2 instance in subnet B in the same VPC.

Which logs can the developer use to verify whether the traffic is reaching subnet B?

  • A. VPN logs
  • B. BGP logs
  • C. VPC Flow Logs
  • D. AWS CloudTrail logs

Correct Answer: C 🗳️

Community vote distribution
C (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: C
VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. This includes traffic that traverses a VPN connection. VPC Flow Logs can be used to monitor and troubleshoot connectivity issues, including verifying whether traffic is reaching a particular subnet within the VPC.
upvoted 3 times
...
Prem28
5 months, 3 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/28802-exam-aws-certified-developer-associate-topic-1-question-219/
upvoted 3 times
...
zodraz
6 months ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/28802-exam-aws-certified-developer-associate-topic-1-question-219/
upvoted 3 times
...
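For option C, a default-format VPC Flow Logs record has 14 space-separated fields; checking the dstaddr and action fields for addresses in subnet B shows whether traffic arrives and whether it is ACCEPTed or REJECTed. A small parser sketch (the sample record below is made up):

```python
# Hedged sketch for option C: parse a default-format VPC Flow Logs record.
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Map one whitespace-separated flow log line onto named fields."""
    return dict(zip(FLOW_LOG_FIELDS, line.split()))

# Made-up record: traffic to a subnet-B address that was rejected,
# which would point at a security group or network ACL issue.
sample = ("2 123456789012 eni-0abc 10.0.1.5 10.0.2.9 443 49152 6 10 840 "
          "1620000000 1620000060 REJECT OK")
```

A REJECT record for subnet B proves the traffic reached the VPC and was blocked by a security group or NACL; no record at all suggests a routing problem instead.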
Question #96 Topic 1

A developer is creating a service that uses an Amazon S3 bucket for image uploads. The service will use an AWS Lambda function to create a thumbnail of each image. Each time an image is uploaded, the service needs to send an email notification and create the thumbnail. The developer needs to configure the image processing and email notifications setup.

Which solution will meet these requirements?

  • A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure S3 event notifications with a destination of the SNS topic. Subscribe the Lambda function to the SNS topic. Create an email notification subscription to the SNS topic.
  • B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure S3 event notifications with a destination of the SNS topic. Subscribe the Lambda function to the SNS topic. Create an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the SQS queue to the SNS topic. Create an email notification subscription to the SQS queue.
  • C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure S3 event notifications with a destination of the SQS queue. Subscribe the Lambda function to the SQS queue. Create an email notification subscription to the SQS queue.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send S3 event notifications to Amazon EventBridge. Create an EventBridge rule that runs the Lambda function when images are uploaded to the S3 bucket. Create an EventBridge rule that sends notifications to the SQS queue. Create an email notification subscription to the SQS queue.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
This solution will allow the developer to receive notifications for each image uploaded to the S3 bucket, and also create a thumbnail using the Lambda function. The SNS topic will serve as a trigger for both the Lambda function and the email notification subscription. When an image is uploaded, S3 will send a notification to the SNS topic, which will trigger the Lambda function to create the thumbnail and also send an email notification to the specified email address.
upvoted 11 times
jipark
3 months ago
Great!! Sending email does not need SQS.
upvoted 1 times
...
...
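Option A wires together three pieces: an S3 event notification to the SNS topic, plus a Lambda subscription and an email subscription on that topic. A sketch of the request shapes (ARNs and the email address are hypothetical) that put_bucket_notification_configuration and sns.subscribe accept:

```python
# Hedged sketch of option A: S3 publishes ObjectCreated events to one SNS
# topic, which fans out to a thumbnail Lambda and an email subscriber.
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:image-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

def subscriptions(topic_arn):
    """The two SNS subscriptions that fan each upload event out."""
    return [
        {"TopicArn": topic_arn, "Protocol": "lambda",
         "Endpoint": "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail"},
        {"TopicArn": topic_arn, "Protocol": "email",
         "Endpoint": "support@example.com"},
    ]
```

One upload produces one event, and SNS delivers it to both subscribers in parallel, which is why no queue is needed here.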
Question #97 Topic 1

A developer has designed an application to store incoming data as JSON files in Amazon S3 objects. Custom business logic in an AWS Lambda function then transforms the objects, and the Lambda function loads the data into an Amazon DynamoDB table. Recently, the workload has experienced sudden and significant changes in traffic. The flow of data to the DynamoDB table is becoming throttled.

The developer needs to implement a solution to eliminate the throttling and load the data into the DynamoDB table more consistently.

Which solution will meet these requirements?

  • A. Refactor the Lambda function into two functions. Configure one function to transform the data and one function to load the data into the DynamoDB table. Create an Amazon Simple Queue Service (Amazon SQS) queue in between the functions to hold the items as messages and to invoke the second function.
  • B. Turn on auto scaling for the DynamoDB table. Use Amazon CloudWatch to monitor the table's read and write capacity metrics and to track consumed capacity.
  • C. Create an alias for the Lambda function. Configure provisioned concurrency for the application to use.
  • D. Refactor the Lambda function into two functions. Configure one function to store the data in the DynamoDB table. Configure the second function to process the data and update the items after the data is stored in DynamoDB. Create a DynamoDB stream to invoke the second function after the data is stored.

Correct Answer: B 🗳️

Community vote distribution
A (52%)
B (26%)
D (22%)

ihebchorfi
Highly Voted 6 months, 1 week ago
Selected Answer: A
A. Refactor the Lambda function into two functions. Configure one function to transform the data and one function to load the data into the DynamoDB table. Create an Amazon Simple Queue Service (Amazon SQS) queue in between the functions to hold the items as messages and to invoke the second function. By breaking the Lambda function into two separate functions and using an SQS queue to hold the transformed data as messages, you can decouple the data transformation and loading processes. This allows for more controlled loading of data into the DynamoDB table and helps eliminate throttling issues.
upvoted 13 times
...
MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: D
This solution will allow the developer to store the incoming data into the DynamoDB table more consistently without being throttled. By splitting the Lambda function into two functions, the first function can store the data into the DynamoDB table and exit quickly, avoiding any throttling issues. The second function can then process the data and update the items after the data is stored in DynamoDB using a DynamoDB stream to invoke the second function. Option A is also a good option but not the best solution because it introduces additional complexity and cost by using an Amazon SQS queue.
upvoted 7 times
robotgeek
5 months, 1 week ago
Sorry but when you say "the first function can store the data into the DynamoDB table and exit quickly, avoiding any throttling issues" I dont understand your point
upvoted 3 times
...
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: A
Answer: A. SQS can be configured to invoke Lambda. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-lambda-function-trigger.html
upvoted 2 times
...
dexdinh91
2 weeks, 6 days ago
Selected Answer: B
I think B
upvoted 1 times
...
jingle4944
3 weeks, 6 days ago
Lambda functions can be triggered by SQS: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-lambda-function-trigger.html
upvoted 1 times
...
Balliache520505
1 month, 1 week ago
Selected Answer: B
I don't believe that option A is correct because an Amazon SQS queue wouldn't invoke a Lambda function; in any case, the Lambda function would be configured to retrieve messages from the SQS queue. For that reason, I believe option B would be the correct choice in this case.
upvoted 1 times
Chicote
1 week, 4 days ago
You are dead wrong.
upvoted 1 times
...
...
Dushank
1 month, 4 weeks ago
Selected Answer: A
Refactoring the Lambda function into two functions and introducing an Amazon Simple Queue Service (Amazon SQS) queue between them would provide a buffering mechanism. The first Lambda function would transform the data and push it to the SQS queue. The second Lambda function would be triggered by messages in the SQS queue to write the data into DynamoDB. This decouples the two operations and allows for more controlled and consistent data loading into DynamoDB, helping to avoid throttling.
upvoted 1 times
...
jipark
3 months ago
Selected Answer: A
The requirement is that the Lambda function loads data into DynamoDB. D is incorrect: "DynamoDB stream invokes Lambda" reverses the order.
upvoted 2 times
...
baboopan18
3 months, 2 weeks ago
Selected Answer: B
The key point is "eliminate the throttling" I prefer B than A
upvoted 3 times
...
qwan
4 months ago
Selected Answer: D
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-architecture.html This is the lifecycle for an SQS message. From my understanding, option A is wrong. SQS cannot invoke a function, as is stated there. So D is the right answer.
upvoted 1 times
tttamtttam
3 months, 3 weeks ago
Lambda functions can be triggered by messages in a SQS queue.
upvoted 4 times
...
...
eberhe900
4 months ago
Selected Answer: B
The developer needs to implement a solution to eliminate the throttling and load the data into the DynamoDB table more consistently. The problem is in DynamoDB and is not associated with the Lambda. So the better solution is to auto scale the DynamoDB table.
upvoted 5 times
...
Phongsanth
4 months ago
Selected Answer: A
An SQS queue between the Lambda functions should deliver the traffic more consistently.
upvoted 2 times
...
gagol14
4 months, 2 weeks ago
Selected Answer: A
This solution will not meet the requirements because it will not load the data into the DynamoDB table more consistently. By using a DynamoDB stream, you can trigger a Lambda function to process data changes in a DynamoDB table. However, this does not guarantee that all data changes will be processed in order, or that no duplicates will occur. Therefore, this solution may result in inconsistent or incorrect data in your DynamoDB table. The best solution is A, because it will eliminate the throttling and load the data into the DynamoDB table more consistently.
upvoted 3 times
...
mgonblan
5 months, 1 week ago
I vote B, because refactoring the Lambdas (A or D) could help, but it doesn't help the DynamoDB table. C would give provisioned concurrency to the Lambda and improve performance, but it doesn't help with the DynamoDB layer. So the best option is B, because you establish auto scaling and configure CloudWatch to monitor which RCU and WCU the table must use.
upvoted 1 times
...
FunkyFresco
5 months, 1 week ago
Selected Answer: D
Option D.
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: D
D is true
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: D
To eliminate throttling and load the data into the DynamoDB table more consistently, you can refactor the Lambda function into two functions and utilize DynamoDB streams.
upvoted 1 times
...
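The decoupling in option A can be sketched with in-memory stand-ins for the boto3 calls (the queue, table, and toy transform below are all hypothetical): the first function enqueues instead of writing to DynamoDB, and the second is invoked by SQS with a bounded batch size, which smooths the write rate:

```python
import json

# Hedged sketch of option A: SQS buffers transformed items between two
# Lambda functions. A Python list stands in for the real queue.
message_queue = []   # stand-in for sqs.send_message / the queue itself

def transform_handler(raw_records):
    """First Lambda: transform and enqueue instead of writing directly."""
    for raw in raw_records:
        item = {"pk": raw["id"], "payload": raw["data"].upper()}  # toy transform
        message_queue.append(json.dumps(item))  # real code: sqs.send_message(...)

def load_handler(event):
    """Second Lambda: invoked by SQS, writes a small batch to DynamoDB."""
    written = []
    for record in event["Records"]:
        written.append(json.loads(record["body"]))  # real code: table.put_item(...)
    return written
```

In a real deployment, the SQS event source mapping's batch size (and concurrency limits on the loader function) cap the write rate into DynamoDB, so traffic spikes sit in the queue instead of causing throttled writes.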
Question #98 Topic 1

A developer is creating an AWS Lambda function in VPC mode. An Amazon S3 event will invoke the Lambda function when an object is uploaded into an S3 bucket. The Lambda function will process the object and produce some analytic results that will be recorded into a file. Each processed object will also generate a log entry that will be recorded into a file.

Other Lambda functions, AWS services, and on-premises resources must have access to the result files and log file. Each log entry must also be appended to the same shared log file. The developer needs a solution that can share files and append results into an existing file.

Which solution should the developer use to meet these requirements?

  • A. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system in Lambda. Store the result files and log file in the mount point. Append the log entries to the log file.
  • B. Create an Amazon Elastic Block Store (Amazon EBS) Multi-Attach enabled volume. Attach the EBS volume to all Lambda functions. Update the Lambda function code to download the log file, append the log entries, and upload the modified log file to Amazon EBS.
  • C. Create a reference to the /tmp local directory. Store the result files and log file by using the directory reference. Append the log entry to the log file.
  • D. Create a reference to the /opt storage directory. Store the result files and log file by using the directory reference. Append the log entry to the log file.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: A
The requirement is to have a shared file system that allows for appending to files and can be accessed by multiple Lambda functions, AWS services, and on-premises resources. Amazon Elastic File System (Amazon EFS) is a good fit for these requirements. EFS provides a scalable and elastic NFS file system which can be mounted to multiple EC2 instances and Lambda functions at the same time, making it easier for these resources to share files. You can also append to existing files on an EFS file system, which meets the requirement for a shared log file that can have new entries appended to it.
upvoted 4 times
...
mgonblan
5 months, 1 week ago
A) There are several references for this: https://docs.aws.amazon.com/lambda/latest/operatorguide/networking-vpc.html and this blog entry: https://aws.amazon.com/es/blogs/compute/choosing-between-aws-lambda-data-storage-options-in-web-apps/
upvoted 1 times
...
delak
5 months, 2 weeks ago
Selected Answer: A
shared files == EFS
upvoted 3 times
...
loctong
5 months, 3 weeks ago
Selected Answer: A
EFS is true
upvoted 2 times
...
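Option A's append semantics are plain POSIX file appends on the EFS mount. A sketch using a temp directory as a stand-in for the mount point (in Lambda the path would be whatever local mount path, such as /mnt/efs, is configured on the function):

```python
import os
import tempfile

# Hedged sketch of option A: every invocation appends one entry to a shared
# log file under the EFS mount point. A temp dir stands in for the mount.
MOUNT_POINT = tempfile.mkdtemp()          # real code: the configured EFS path
LOG_FILE = os.path.join(MOUNT_POINT, "processing.log")

def record_result(object_key, result):
    """Append one log entry to the shared log file on EFS."""
    with open(LOG_FILE, "a") as log:      # 'a' = append, never truncate
        log.write(f"{object_key}\t{result}\n")
```

This is exactly what /tmp (option C) cannot provide: /tmp is private to one execution environment, while an EFS mount is shared by all the Lambda functions, services, and on-premises clients that mount it.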
Question #99 Topic 1

A company has an AWS Lambda function that processes incoming requests from an Amazon API Gateway API. The API calls the Lambda function by using a Lambda alias. A developer updated the Lambda function code to handle more details related to the incoming requests. The developer wants to deploy the new Lambda function for more testing by other developers with no impact to customers that use the API.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a new version of the Lambda function. Create a new stage on API Gateway with integration to the new Lambda version. Use the new API Gateway stage to test the Lambda function.
  • B. Update the existing Lambda alias used by API Gateway to a weighted alias. Add the new Lambda version as an additional Lambda function with a weight of 10%. Use the existing API Gateway stage for testing.
  • C. Create a new version of the Lambda function. Create and deploy a second Lambda function to filter incoming requests from API Gateway. If the filtering Lambda function detects a test request, the filtering Lambda function will invoke the new Lambda version of the code. For other requests, the filtering Lambda function will invoke the old Lambda version. Update the API Gateway API to use the filtering Lambda function.
  • D. Create a new version of the Lambda function. Create a new API Gateway API for testing purposes. Update the integration of the new API with the new Lambda version. Use the new API for testing.

Correct Answer: C 🗳️

Community vote distribution
A (100%)

NaghamAbdellatif
1 month, 2 weeks ago
Why not B? There is canary testing in Lambda Functions
upvoted 1 times
Cerakoted
1 month ago
Because of this requirement: "new Lambda function for more testing by other developers with no impact to customers that use the API."
upvoted 2 times
...
...
jayvarma
3 months ago
There is no need for us to create an all-new API Gateway in order to test the newer version of the Lambda. Once a newer version of the Lambda function is deployed with the necessary changes, a new stage of the API Gateway can be used to test the changes to the Lambda function.
upvoted 2 times
jayvarma
3 months ago
So A is the right option
upvoted 2 times
...
...
jipark
3 months ago
Selected Answer: A
A: create a new API stage (add a stage) - correct. D: create a new API Gateway (a new one) - incorrect.
upvoted 2 times
...
MrPie
4 months ago
Selected Answer: A
A is correct. Why is the "correct answer" always wrong? What's the point?
upvoted 3 times
JamalDaBoss
3 months ago
I agree, very stupid
upvoted 1 times
...
...
FunkyFresco
4 months ago
Selected Answer: A
A is ok according to my perspective.
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: A
A's true
upvoted 1 times
...
delak
5 months, 3 weeks ago
Selected Answer: A
A is true
upvoted 1 times
...
rlnd2000
5 months, 3 weeks ago
Selected Answer: A
In my perspective, A is the correct answer and a pretty typical pattern; I'm not sure why C was chosen, but testing in production is not a smart practice.
upvoted 1 times
...
chumji
5 months, 3 weeks ago
The answer is A
upvoted 3 times
...
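To make the commenters' preferred option A concrete: a new API Gateway stage can pin a specific published Lambda version by qualifying the function ARN in the integration URI. The sketch below only builds that URI string; the region, account ID, function name, and version number are hypothetical placeholders.

```python
# Sketch (option A): a new API Gateway stage points at one published Lambda
# version by appending a qualifier to the function ARN in the integration URI.
# All identifiers below are illustrative placeholders.

def lambda_integration_uri(region: str, function_arn: str, version: str) -> str:
    """Build the API Gateway Lambda-integration URI for one published version."""
    return (
        f"arn:aws:apigateway:{region}:lambda:path/2015-03-31/functions/"
        f"{function_arn}:{version}/invocations"
    )

uri = lambda_integration_uri(
    "us-east-1",
    "arn:aws:lambda:us-east-1:123456789012:function:orders",
    "2",  # the newly published version under test
)
```

A test stage configured with this URI invokes only version 2, while the existing stage keeps serving the unqualified (or previously pinned) version for customers.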
Question #100 Topic 1

A company uses AWS Lambda functions and an Amazon S3 trigger to process images into an S3 bucket. A development team set up multiple environments in a single AWS account.

After a recent production deployment, the development team observed that the development S3 buckets invoked the production environment Lambda functions. These invocations caused unwanted execution of development S3 files by using production Lambda functions. The development team must prevent these invocations. The team must follow security best practices.

Which solution will meet these requirements?

  • A. Update the Lambda execution role for the production Lambda function to add a policy that allows the execution role to read from only the production environment S3 bucket.
  • B. Move the development and production environments into separate AWS accounts. Add a resource policy to each Lambda function to allow only S3 buckets that are within the same account to invoke the function.
  • C. Add a resource policy to the production Lambda function to allow only the production environment S3 bucket to invoke the function.
  • D. Move the development and production environments into separate AWS accounts. Update the Lambda execution role for each function to add a policy that allows the execution role to read from the S3 bucket that is within the same account.

Correct Answer: C 🗳️

Community vote distribution
B (52%)
C (45%)
3%

AgboolaKun
Highly Voted 5 months, 3 weeks ago
Selected Answer: C
B is a wrong answer because I do not understand the need to move the environments to separate AWS accounts. The resource policy in the production environment can be used to control which S3 bucket invokes the function. In my understanding, the answer choice C fulfills the security best practices requirement in the question.
upvoted 14 times
MrPie
4 months ago
It's a best practice: Best Practices: Separate workloads using accounts: Establish common guardrails and isolation between environments (such as production, development, and test) and workloads through a multi-account strategy. Account-level separation is strongly recommended, as it provides a strong isolation boundary for security, billing, and access. https://wa.aws.amazon.com/wat.question.SEC_1.en.html
upvoted 8 times
...
jipark
3 months ago
resource policy totally fulfill requirement
upvoted 3 times
...
...
csG13
Highly Voted 5 months ago
Selected Answer: B
I choose B because it says that the team should follow the best security practices. AWS well-architected framework suggests separation. For reference see the link below: https://wa.aws.amazon.com/wat.question.SEC_1.en.html
upvoted 11 times
...
Rameez1
Most Recent 2 weeks, 6 days ago
Selected Answer: B
Moving the dev and prod environments to separate accounts makes them totally isolated from cross-account Lambda invocations. With option C, although the prod Lambda won't be triggered by a dev S3 bucket event, the dev Lambda may still be mistakenly invoked by a prod S3 bucket event and perform unwanted actions.
upvoted 2 times
...
Nagasoracle
2 weeks, 6 days ago
Selected Answer: B
Sorry, it is B, as the question mentions following security best practices.
upvoted 1 times
Chicote
1 week, 4 days ago
You're so annoying
upvoted 1 times
...
...
Nagasoracle
2 weeks, 6 days ago
Selected Answer: A
Answer: A, as it mentions following best security practices.
upvoted 1 times
...
Millie024
1 month, 2 weeks ago
B seems to be the correct one https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_securely_operate_multi_accounts.html Establish common guardrails and isolation between environments (such as production, development, and test) and workloads through a multi-account strategy. Account-level separation is strongly recommended, as it provides a strong isolation boundary for security, billing, and access.
upvoted 1 times
...
fossil123
2 months, 1 week ago
Selected Answer: C
C meets the contextual security requirements.
upvoted 1 times
...
stilloneway
2 months, 2 weeks ago
Selected Answer: B
Given the question's wording about "security best practices", the answer is B. C would be the second option if separate AWS accounts are not possible.
upvoted 1 times
...
love777
2 months, 2 weeks ago
C. Add a resource policy to the production Lambda function to allow only the production environment S3 bucket to invoke the function. Explanation: In this scenario, the goal is to prevent unwanted invocations of production Lambda functions by development S3 buckets. Adding a resource policy directly to the production Lambda function that restricts invocations to only the production S3 bucket ensures that the function is only invoked by the intended bucket. ChatGPT
upvoted 2 times
...
loctong
5 months, 3 weeks ago
Selected Answer: B
chatgpt said
upvoted 1 times
...
junrun3
5 months, 3 weeks ago
Selected Answer: B
Answer is B
upvoted 1 times
...
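For the camp arguing option C, the mechanism is a resource-based policy statement on the production Lambda function. A minimal sketch of the `add_permission` parameters follows; the function name, bucket ARN, and account ID are hypothetical.

```python
# Sketch of option C: a resource-based policy statement on the production
# Lambda function so that only the production S3 bucket can invoke it.
# All names, ARNs, and IDs below are hypothetical placeholders.

def s3_invoke_permission(function_name: str, bucket_arn: str, account_id: str) -> dict:
    """Parameters one might pass to lambda.add_permission (boto3)."""
    return {
        "FunctionName": function_name,
        "StatementId": "AllowProdBucketOnly",
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": bucket_arn,      # restricts invocation to this bucket
        "SourceAccount": account_id,  # guards against bucket-name reuse
    }

params = s3_invoke_permission(
    "prod-image-processor",
    "arn:aws:s3:::prod-images-bucket",
    "123456789012",
)
# boto3.client("lambda").add_permission(**params) would apply the statement
```

Note this only protects the production function; as the option-B voters point out, account-level separation is the stronger isolation boundary.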
Question #101 Topic 1

A developer is creating an application. New users of the application must be able to create an account and register by using their own social media accounts.

Which AWS service or resource should the developer use to meet these requirements?

  • A. IAM role
  • B. Amazon Cognito identity pools
  • C. Amazon Cognito user pools
  • D. AWS Directory Service

Correct Answer: C 🗳️

Community vote distribution
C (82%)
B (18%)

HuiHsin
Highly Voted 5 months ago
Selected Answer: C
https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html
upvoted 7 times
...
Cloud_Cloud
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
https://medium.com/wolox/integrating-social-media-to-your-app-with-aws-cognito-8943329aa89b
upvoted 5 times
...
Bhatfield
Most Recent 1 month, 1 week ago
Amazon Cognito user pools provide user identity management and authentication for your application. They allow you to create and maintain a user directory, and you can enable social identity providers like Facebook, Google, or Amazon to allow users to register and log in using their social media accounts. This service is specifically designed for user management and authentication scenarios like the one described. Option B, "Amazon Cognito identity pools," is more focused on providing temporary AWS credentials for users to access AWS services securely after they have been authenticated through a user pool.
upvoted 3 times
...
Dushank
1 month, 4 weeks ago
Selected Answer: C
For creating an application where new users can create accounts and register using their social media accounts, Amazon Cognito is the most suitable service. Specifically, you'd want to use Amazon Cognito User Pools. Amazon Cognito User Pools support sign-ins using social identity providers like Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0. With a user pool, you can create a fully managed user directory to enable user sign-up and sign-in, as well as handle password recovery, user verification, and other user management tasks.
upvoted 2 times
...
Dushank
1 month, 4 weeks ago
The answer is (B). Amazon Cognito identity pools is a managed service that provides user sign-in and identity management for your web and mobile applications. It supports social sign-in with a variety of providers, including Amazon, Facebook, Google, and Twitter.
upvoted 1 times
...
hanJR
6 months, 1 week ago
Selected Answer: C
You can't register using an identity pool. It lets you authenticate with supported identity providers.
upvoted 4 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
Key word is registration using their social media accounts
upvoted 4 times
rlnd2000
5 months, 3 weeks ago
Using Cognito identity pools you can get the token and access AWS using social media accounts, BUT you can't create an account, in this case we need Cognito user pools.
upvoted 1 times
...
awsdummie
6 months ago
B is incorrect. https://www.youtube.com/watch?v=9pvygKIuCpI
upvoted 1 times
...
...
Question #102 Topic 1

A social media application uses the AWS SDK for JavaScript on the frontend to get user credentials from AWS Security Token Service (AWS STS). The application stores its assets in an Amazon S3 bucket. The application serves its content by using an Amazon CloudFront distribution with the origin set to the S3 bucket.

The credentials for the role that the application assumes to make the SDK calls are stored in plaintext in a JSON file within the application code. The developer needs to implement a solution that will allow the application to get user credentials without having any credentials hardcoded in the application code.

Which solution will meet these requirements?

  • A. Add a Lambda@Edge function to the distribution. Invoke the function on viewer request. Add permissions to the function's execution role to allow the function to access AWS STS. Move all SDK calls from the frontend into the function.
  • B. Add a CloudFront function to the distribution. Invoke the function on viewer request. Add permissions to the function's execution role to allow the function to access AWS STS. Move all SDK calls from the frontend into the function.
  • C. Add a Lambda@Edge function to the distribution. Invoke the function on viewer request. Move the credentials from the JSON file into the function. Move all SDK calls from the frontend into the function.
  • D. Add a CloudFront function to the distribution. Invoke the function on viewer request. Move the credentials from the JSON file into the function. Move all SDK calls from the frontend into the function.

Correct Answer: A 🗳️

Community vote distribution
A (76%)
B (24%)

csG13
Highly Voted 5 months ago
Selected Answer: A
The answer is A. Here is a reference directly from AWS docs: "If you need some of the capabilities of Lambda@Edge that are not available with CloudFront Functions, such as network access or a longer execution time, you can still use Lambda@Edge before and after content is cached by CloudFront." Since the requirement is to access the STS service, network access is required. Therefore, it can't be Cloudfront functions. Also, as a side note it's worth to mention that Cloudfront functions can only execute for up to 1ms. Apparently this isn't enough to fetch user creds (tokens) from STS. The table in the following link summarises the differences between Cloudfront functions and Lambda@edge https://aws.amazon.com/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
upvoted 7 times
...
Baba_Eni
Most Recent 2 months ago
Selected Answer: A
I will go for A; check the link below. CloudFront Functions run only within CloudFront, hence they DON'T HAVE NETWORK ACCESS. Network access is required to make a call to AWS STS. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions.html
upvoted 1 times
...
MG1407
2 months, 3 weeks ago
The answer is B. I was in agreement with csG13 until a further research into the JavaScript SDK and STS. Found the following: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-cloudfront/classes/stsclient.html. Since the question states Js SDK and STS the answer is B.
upvoted 1 times
...
FunkyFresco
5 months, 1 week ago
Selected Answer: A
Option A.
upvoted 1 times
...
zodraz
6 months ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/89838-exam-aws-certified-developer-associate-topic-1-question-361/
upvoted 2 times
...
vic614
6 months, 1 week ago
Selected Answer: A
CloudFront Functions don't have network access; it has to be Lambda@Edge.
upvoted 2 times
...
MrTee
6 months, 2 weeks ago
Selected Answer: B
The difference between A and B is the SDK for Javascript in use here; Lambda@Edge functions can be written in a variety of programming languages, including Node.js, Python, and Java, while CloudFront functions are written in JavaScript.
upvoted 4 times
Cloud_Cloud
6 months, 2 weeks ago
One problem is that the function cannot perform AWS STS calls.
upvoted 1 times
eboehm
4 months, 3 weeks ago
After rereading the last part of the question: it doesn't mention that the code must remain written in JavaScript, but using AWS STS does seem to be a requirement, so I would stick with A as the answer.
upvoted 1 times
...
...
...
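To make option A concrete, here is a minimal sketch of a Lambda@Edge viewer-request handler that fetches temporary credentials from STS, so nothing is hardcoded in the frontend. The role ARN is hypothetical, and the STS client is passed in explicitly (a real handler would create it with boto3) so a stand-in client can demonstrate the flow without AWS access.

```python
# Sketch of option A: viewer-request Lambda@Edge handler that obtains
# short-lived credentials from STS. Role ARN and header name are
# hypothetical; the client is injected for local demonstration.

def viewer_request_handler(event, sts_client):
    creds = sts_client.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/app-frontend-role",
        RoleSessionName="edge-session",
    )["Credentials"]
    request = event["Records"][0]["cf"]["request"]
    # Hand the short-lived token onward instead of shipping static keys
    request["headers"]["x-session-token"] = [
        {"key": "x-session-token", "value": creds["SessionToken"]}
    ]
    return request

class _FakeSTS:
    """Stand-in for boto3's STS client so the sketch runs locally."""
    def assume_role(self, **kwargs):
        return {"Credentials": {"SessionToken": "demo-token"}}

event = {"Records": [{"cf": {"request": {"headers": {}}}}]}
result = viewer_request_handler(event, _FakeSTS())
```

The STS call is exactly the part a CloudFront Function cannot do, since CloudFront Functions have no network access.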
Question #103 Topic 1

An ecommerce website uses an AWS Lambda function and an Amazon RDS for MySQL database for an order fulfillment service. The service needs to return order confirmation immediately.

During a marketing campaign that caused an increase in the number of orders, the website's operations team noticed errors for “too many connections” from Amazon RDS. However, the RDS DB cluster metrics are healthy. CPU and memory capacity are still available.

What should a developer do to resolve the errors?

  • A. Initialize the database connection outside the handler function. Increase the max_user_connections value on the parameter group of the DB cluster. Restart the DB cluster.
  • B. Initialize the database connection outside the handler function. Use RDS Proxy instead of connecting directly to the DB cluster.
  • C. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the orders. Ingest the orders into the database. Set the Lambda function's concurrency to a value that equals the number of available database connections.
  • D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the orders. Ingest the orders into the database. Set the Lambda function's concurrency to a value that is less than the number of available database connections.

Correct Answer: A 🗳️

Community vote distribution
B (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
Use an RDS Proxy instead of connecting directly to the DB cluster.
upvoted 7 times
...
hmdev
Most Recent 2 months, 2 weeks ago
Selected Answer: B
We can use an RDS proxy to handle a lot of connections. We are choosing this option because the load on the RDS is normal. If the RDS was unable to handle loads, we would've checked other options like queues or transactions.
upvoted 2 times
...
eberhe900
4 months ago
Selected Answer: B
https://repost.aws/questions/QULXSqEPGbQx6qiyBa1D1Udg/lambda-to-db-connectivity-best-practices
upvoted 1 times
...
loctong
5 months, 2 weeks ago
Selected Answer: B
Using an RDS Proxy can manage connections to the RDS instance, reducing the overhead of establishing new connections and thereby preventing the "too many connections" error.
upvoted 2 times
...
hanJR
6 months, 1 week ago
B https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
upvoted 4 times
...
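Option B has two halves: RDS Proxy in front of the database, and initializing the connection outside the handler. The connection-reuse half can be sketched as below, where `connect_fn` stands in for e.g. `pymysql.connect` against a hypothetical proxy endpoint, so the caching behavior can be shown without a real database.

```python
# Sketch of the connection-reuse half of option B: the connection lives at
# module scope, so warm Lambda invocations reuse it rather than opening a
# new one per request (RDS Proxy then pools whatever connections remain).
# connect_fn is a placeholder for a real driver call such as pymysql.connect.

_connection = None  # survives across warm invocations of the same sandbox

def get_connection(connect_fn):
    global _connection
    if _connection is None:  # only the first (cold) invocation connects
        _connection = connect_fn()
    return _connection

def handler(event, context, connect_fn):
    conn = get_connection(connect_fn)
    # ... run the order-fulfillment query on conn ...
    return {"statusCode": 200}

calls = []
fake_connect = lambda: calls.append(1) or object()
handler({}, None, fake_connect)
handler({}, None, fake_connect)  # warm invocation: no new connection opened
```

Without this pattern, every invocation opens a fresh connection, which is exactly what produced the "too many connections" errors under load.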
Question #104 Topic 1

A company stores its data in data tables in a series of Amazon S3 buckets. The company received an alert that customer credit card information might have been exposed in a data table on one of the company's public applications. A developer needs to identify all potential exposures within the application environment.

Which solution will meet these requirements?

  • A. Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • B. Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
  • C. Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • D. Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.

Correct Answer: D 🗳️

Community vote distribution
B (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type. Option A and D suggest using Amazon Athena, which is an interactive query service that can be used to analyze data stored in S3 using standard SQL queries. While Athena can help identify data in S3 buckets, it does not provide the same level of automated scanning and pattern matching that Amazon Macie does. Option C is incorrect because the SensitiveData:S3Object/Personal finding type is designed to identify personally identifiable information (PII), such as names and addresses, but not credit card information.
upvoted 10 times
...
Baba_Eni
Most Recent 4 months, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/macie/latest/user/findings-types.html
upvoted 2 times
...
HuiHsin
5 months ago
Selected Answer: B
https://docs.aws.amazon.com/zh_tw/macie/latest/user/findings-types.html
upvoted 1 times
...
Prem28
5 months, 3 weeks ago
Selected Answer: B
The best solution to identify all potential exposures within the application environment after receiving an alert that customer credit card information might have been exposed in a data table on one of the company's public applications is to use Amazon Macie. Amazon Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS.
upvoted 1 times
...
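For reference, the filter in option B is expressed as Macie finding criteria. A minimal sketch of the criteria dictionary, shaped the way the `macie2` `list_findings` call expects it (the structure below matches the documented criterion/eq form):

```python
# Sketch of option B's filter: Macie finding criteria that match only
# financial-information findings (credit card data falls under the
# SensitiveData:S3Object/Financial finding type).

finding_criteria = {
    "criterion": {
        "type": {"eq": ["SensitiveData:S3Object/Financial"]},
    }
}
# boto3.client("macie2").list_findings(findingCriteria=finding_criteria)
# would return only the financial-data findings from the classification job
```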
Question #105 Topic 1

A software company is launching a multimedia application. The application will allow guest users to access sample content before the users decide if they want to create an account to gain full access. The company wants to implement an authentication process that can identify users who have already created an account. The company also needs to keep track of the number of guest users who eventually create an account.

Which combination of steps will meet these requirements? (Choose two.)

  • A. Create an Amazon Cognito user pool. Configure the user pool to allow unauthenticated users. Exchange user tokens for temporary credentials that allow authenticated users to assume a role.
  • B. Create an Amazon Cognito identity pool. Configure the identity pool to allow unauthenticated users. Exchange unique identity for temporary credentials that allow all users to assume a role.
  • C. Create an Amazon CloudFront distribution. Configure the distribution to allow unauthenticated users. Exchange user tokens for temporary credentials that allow all users to assume a role.
  • D. Create a role for authenticated users that allows access to all content. Create a role for unauthenticated users that allows access to only the sample content.
  • E. Allow all users to access the sample content by default. Create a role for authenticated users that allows access to the other content.

Correct Answer: BE 🗳️

Community vote distribution
BD (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: BD
option B because by configuring the identity pool to allow unauthenticated users, you can enable guest users to access the sample content. When users create an account, they can be authenticated, and then given access to the full content by assuming a role that allows them access. Option D is correct because creating roles for authenticated and unauthenticated users with different levels of access is an appropriate way to meet the requirement of identifying users who have created an account and keeping track of the number of guest users who eventually create an account.
upvoted 14 times
...
jipark
Most Recent 3 months ago
Selected Answer: BD
"who already created an account" means a user pool is not required - NOT A
upvoted 2 times
...
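Option D's role split can be sketched as two IAM policies attached to the identity pool's authenticated and unauthenticated roles: guests read only a sample prefix, members read everything. The bucket name and prefix below are hypothetical.

```python
# Sketch of option D: IAM policy documents for the identity pool's two roles.
# "media-content" and "samples/" are illustrative placeholders.

def content_policy(bucket: str, prefix: str = "") -> dict:
    """Allow read access to objects under the given prefix (or all objects)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
        }],
    }

guest_policy = content_policy("media-content", "samples/")  # unauthenticated role
member_policy = content_policy("media-content")             # authenticated role
```

Because guests and members assume different roles, the role-assumption events also give you a way to count which guest identities later show up as authenticated users.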
Question #106 Topic 1

A company is updating an application to move the backend of the application from Amazon EC2 instances to a serverless model. The application uses an Amazon RDS for MySQL DB instance and runs in a single VPC on AWS. The application and the DB instance are deployed in a private subnet in the VPC.

The company needs to connect AWS Lambda functions to the DB instance.

Which solution will meet these requirements?

  • A. Create Lambda functions inside the VPC with the AWSLambdaBasicExecutionRole policy attached to the Lambda execution role. Modify the RDS security group to allow inbound access from the Lambda security group.
  • B. Create Lambda functions inside the VPC with the AWSLambdaVPCAccessExecutionRole policy attached to the Lambda execution role. Modify the RDS security group to allow inbound access from the Lambda security group.
  • C. Create Lambda functions with the AWSLambdaBasicExecutionRole policy attached to the Lambda execution role. Create an interface VPC endpoint for the Lambda functions. Configure the interface endpoint policy to allow the lambda:InvokeFunction action for each Lambda function's Amazon Resource Name (ARN).
  • D. Create Lambda functions with the AWSLambdaVPCAccessExecutionRole policy attached to the Lambda execution role. Create an interface VPC endpoint for the Lambda functions. Configure the interface endpoint policy to allow the lambda:InvokeFunction action for each Lambda function's Amazon Resource Name (ARN).

Correct Answer: B 🗳️

Community vote distribution
B (79%)
D (21%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
The AWSLambdaVPCAccessExecutionRole policy allows the Lambda function to create elastic network interfaces (ENIs) in the VPC and use the security groups attached to those ENIs for controlling inbound and outbound traffic.
upvoted 9 times
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: D
Answer : D
upvoted 1 times
...
love777
2 months, 1 week ago
Selected Answer: D
While Lambda functions cannot run directly in private subnets, they can be configured to access resources within a VPC by creating a VPC endpoint for Lambda. AWS Lambda supports VPC Endpoints for Lambda, which allow Lambda functions to securely access resources within a VPC without needing to traverse the public internet. You should attach the AWSLambdaVPCAccessExecutionRole policy to your Lambda execution role to enable it to create network interfaces in your VPC for accessing resources. By configuring an interface VPC endpoint for Lambda, you can enable the Lambda function to communicate with resources within the private subnet and the RDS instance.
upvoted 2 times
...
Baba_Eni
4 months, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html
upvoted 2 times
...
Prem28
5 months ago
Answer: option D. Option A does not allow Lambda functions to access resources in the VPC. Option B does not create an interface VPC endpoint, which means that Lambda functions will be exposed to the public internet. Option C does not configure the interface endpoint policy to allow the lambda:InvokeFunction action, which means that Lambda functions will not be able to invoke each other.
upvoted 2 times
jipark
3 months ago
I definitely agree. Lambda functions are not installed inside the VPC; instead, AWSLambdaVPCAccessExecutionRole allows them to connect via an ENI.
upvoted 1 times
...
...
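Option B's two pieces can be sketched as data: the managed policy that grants ENI creation, and the `VpcConfig` that places the function in the VPC so its security group can be allowed inbound on the RDS security group. The subnet and security-group IDs are hypothetical.

```python
# Sketch of option B: attach the VPC-access managed policy to the execution
# role, and configure the function's VPC placement. IDs are placeholders.

LAMBDA_EXECUTION_POLICY = (
    "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
)

vpc_config = {
    "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
    "SecurityGroupIds": ["sg-0abc1234"],  # allow this SG inbound on the RDS SG
}
# boto3.client("lambda").create_function(..., VpcConfig=vpc_config) would
# create ENIs in those subnets; the RDS SG then permits port 3306 from
# sg-0abc1234 only.
```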
Question #107 Topic 1

A company has a web application that runs on Amazon EC2 instances with a custom Amazon Machine Image (AMI). The company uses AWS CloudFormation to provision the application. The application runs in the us-east-1 Region, and the company needs to deploy the application to the us-west-1 Region.

An attempt to create the AWS CloudFormation stack in us-west-1 fails. An error message states that the AMI ID does not exist. A developer must resolve this error with a solution that uses the least amount of operational overhead.

Which solution meets these requirements?

  • A. Change the AWS CloudFormation templates for us-east-1 and us-west-1 to use an AWS AMI. Relaunch the stack for both Regions.
  • B. Copy the custom AMI from us-east-1 to us-west-1. Update the AWS CloudFormation template for us-west-1 to refer to AMI ID for the copied AMI. Relaunch the stack.
  • C. Build the custom AMI in us-west-1. Create a new AWS CloudFormation template to launch the stack in us-west-1 with the new AMI ID.
  • D. Manually deploy the application outside AWS CloudFormation in us-west-1.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
This will allow the company to deploy the application to the us-west-1 Region using the same custom AMI that is used in the us-east-1 Region.
upvoted 8 times
...
gomurali
Most Recent 4 months, 1 week ago
https://www.examtopics.com/discussions/amazon/view/78848-exam-aws-certified-developer-associate-topic-1-question-118/
upvoted 2 times
...
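Option B is a one-time copy: AMIs are regional, so the us-east-1 AMI ID simply does not exist in us-west-1 until it is copied. A boto3 sketch with a hypothetical source AMI ID:

```python
# Sketch of option B: copy the custom AMI into us-west-1, then reference the
# returned AMI ID in the us-west-1 CloudFormation template (e.g. as a
# parameter). The source AMI ID below is a hypothetical placeholder.

copy_params = {
    "Name": "web-app-ami-us-west-1",
    "SourceImageId": "ami-0123456789abcdef0",
    "SourceRegion": "us-east-1",
}
# ec2_west = boto3.client("ec2", region_name="us-west-1")
# new_ami_id = ec2_west.copy_image(**copy_params)["ImageId"]
```

A longer-term refinement is a Mappings section (or SSM parameter) holding the per-region AMI ID, so one template serves both Regions.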
Question #108 Topic 1

A developer is updating several AWS Lambda functions and notices that all the Lambda functions share the same custom libraries. The developer wants to centralize all the libraries, update the libraries in a convenient way, and keep the libraries versioned.

Which solution will meet these requirements with the LEAST development effort?

  • A. Create an AWS CodeArtifact repository that contains all the custom libraries.
  • B. Create a custom container image for the Lambda functions to save all the custom libraries.
  • C. Create a Lambda layer that contains all the custom libraries.
  • D. Create an Amazon Elastic File System (Amazon EFS) file system to store all the custom libraries.

Correct Answer: D 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
The most efficient solution is to use a Lambda layer to store the common libraries, update them in one place, and reference them from each Lambda function that requires them.
upvoted 12 times
...
HuiHsin
Most Recent 5 months ago
Selected Answer: C
The Lambda layer of option C provides a simpler solution without the need to introduce an additional CodeArtifact service.
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: C
Lambda layers are a distribution mechanism for libraries, custom runtimes, and other function dependencies in AWS Lambda. By creating a Lambda layer, you can package and centrally manage the shared custom libraries for the Lambda functions.
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: C
It should be Create a Lambda layer.
upvoted 1 times
...
Ryan1002
6 months ago
Why not CodeArtifact? "CodeArtifact allows you to store artifacts using popular package managers and build tools like Maven, Gradle, npm, Yarn, Twine, pip, and NuGet. CodeArtifact can automatically fetch software packages on demand from public package repositories so you can access the latest versions of application dependencies."
upvoted 2 times
jipark
3 months ago
"LEAST development effort"
upvoted 1 times
...
...
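For the Lambda-layer answer (C), the only structural requirement for a Python runtime is that the zip contain a top-level `python/` directory. The sketch below builds such a zip in memory; the library file names are hypothetical.

```python
# Sketch of option C: package shared libraries in the directory layout a
# Python-runtime Lambda layer expects (a "python/" folder at the zip root).
# File names and contents below are illustrative.
import io
import zipfile

def build_layer_zip(files: dict) -> bytes:
    """files maps library-relative paths to file contents."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path, content in files.items():
            zf.writestr(f"python/{path}", content)  # layer path convention
    return buf.getvalue()

layer_bytes = build_layer_zip({
    "mylibs/__init__.py": "",
    "mylibs/util.py": "VERSION = '1.0'",
})
# boto3.client("lambda").publish_layer_version(
#     LayerName="shared-libs", Content={"ZipFile": layer_bytes})
# would publish a new, automatically versioned layer revision
```

Each `publish_layer_version` call produces a new immutable version number, which is what gives the "keep the libraries versioned" requirement for free.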
Question #109 Topic 1

A developer wants to use AWS Elastic Beanstalk to test a new version of an application in a test environment.

Which deployment method offers the FASTEST deployment?

  • A. Immutable
  • B. Rolling
  • C. Rolling with additional batch
  • D. All at once

Correct Answer: D 🗳️

Community vote distribution
D (100%)

loctong
Highly Voted 5 months, 3 weeks ago
Selected Answer: D
The "All at once" deployment method deploys the new version of the application to all instances simultaneously. It updates all instances of the environment in a short period of time, resulting in the fastest overall deployment.
upvoted 5 times
...
yeacuz
Highly Voted 5 months, 3 weeks ago
Selected Answer: D
The answer is D. "All at once – The quickest deployment method." https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
upvoted 5 times
...
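For reference, the deployment policy is an environment option setting, so a test environment can opt into all-at-once with a small `.ebextensions` fragment (file name illustrative):

```yaml
# .ebextensions/deploy.config — select the fastest deployment policy,
# suitable for a test environment where brief downtime is acceptable
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: AllAtOnce
```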
Question #110 Topic 1

A company is providing read access to objects in an Amazon S3 bucket for different customers. The company uses IAM permissions to restrict access to the S3 bucket. The customers can access only their own files.

Due to a regulation requirement, the company needs to enforce encryption in transit for interactions with Amazon S3.

Which solution will meet these requirements?

  • A. Add a bucket policy to the S3 bucket to deny S3 actions when the aws:SecureTransport condition is equal to false.
  • B. Add a bucket policy to the S3 bucket to deny S3 actions when the s3:x-amz-acl condition is equal to public-read.
  • C. Add an IAM policy to the IAM users to enforce the usage of the AWS SDK.
  • D. Add an IAM policy to the IAM users that allows S3 actions when the s3:x-amz-acl condition is equal to bucket-owner-read.

Correct Answer: D 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
This solution enforces encryption in transit for interactions with Amazon S3 by denying access to the S3 bucket if the request is not made over an HTTPS connection. This condition can be enforced by using the "aws:SecureTransport" condition key in a bucket policy.
upvoted 14 times
jipark
3 months ago
'in transit' = SSL Secure Transport
upvoted 2 times
...
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: A
To enforce encryption in transit for interactions with Amazon S3, you can add a bucket policy to the S3 bucket that denies S3 actions when the aws:SecureTransport condition is equal to false. This condition checks whether the requests to S3 are made over a secure (HTTPS) connection.
upvoted 3 times
...
rlnd2000
5 months, 3 weeks ago
Selected Answer: A
https://repost.aws/knowledge-center/s3-bucket-policy-for-config-rule
upvoted 2 times
...
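The bucket policy from option A can be sketched as a single Deny statement keyed on `aws:SecureTransport`; the bucket name below is a hypothetical placeholder.

```python
# Sketch of option A: deny every S3 action when the request is not made over
# TLS. "customer-files" is an illustrative bucket name.
import json

def tls_only_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # bucket-level actions
                f"arn:aws:s3:::{bucket}/*",    # object-level actions
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

policy_json = json.dumps(tls_only_policy("customer-files"))
```

An explicit Deny overrides any Allow the per-customer IAM permissions grant, so HTTP requests are rejected while the existing HTTPS access paths keep working.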
Question #111 Topic 1

A company has an image storage web application that runs on AWS. The company hosts the application on Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group acts as the target group for an Application Load Balancer (ALB) and uses an Amazon S3 bucket to store the images for sale.

The company wants to develop a feature to test system requests. The feature will direct requests to a separate target group that hosts a new beta version of the application.

Which solution will meet this requirement with the LEAST effort?

  • A. Create a new Auto Scaling group and target group for the beta version of the application. Update the ALB routing rule with a condition that looks for a cookie named version that has a value of beta. Update the test system code to use this cookie to test the beta version of the application.
  • B. Create a new ALB, Auto Scaling group, and target group for the beta version of the application. Configure an alternate Amazon Route 53 record for the new ALB endpoint. Use the alternate Route 53 endpoint in the test system requests to test the beta version of the application.
  • C. Create a new ALB, Auto Scaling group, and target group for the beta version of the application. Use Amazon CloudFront with Lambda@Edge to determine which specific request will go to the new ALB. Use the CloudFront endpoint to send the test system requests to test the beta version of the application.
  • D. Create a new Auto Scaling group and target group for the beta version of the application. Update the ALB routing rule with a condition that looks for a cookie named version that has a value of beta. Use Amazon CloudFront with Lambda@Edge to update the test system requests to add the required cookie when the requests go to the ALB.

Correct Answer: D 🗳️

Community vote distribution
A (57%)
B (43%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
This solution will allow the company to direct requests to a separate target group that hosts the new beta version of the application without having to create a new ALB or use additional services such as Amazon Route 53 or Amazon CloudFront. Option D adds additional complexity and effort compared to option A, which simply involves updating the ALB routing rule with a condition that looks for a cookie named version that has a value of beta and updating the test system code to use this cookie to test the beta version of the application.
upvoted 13 times
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: B
Considering Least effort
upvoted 1 times
...
LemonGremlin
2 weeks, 6 days ago
Selected Answer: A
Agree that this is A
upvoted 1 times
...
Rameez1
3 weeks, 2 days ago
Selected Answer: A
Option A serves the requirement with least efforts.
upvoted 1 times
...
nnecode
1 month, 1 week ago
Selected Answer: B
Which solution will meet this requirement with the LEAST effort? Updating code will be more effort, hence B is the correct answer.
upvoted 2 times
...
backfringe
3 months, 1 week ago
Selected Answer: B
Option B provides the simplest and least effort solution to test the beta version of the application. By creating a new ALB, Auto Scaling group, and target group for the beta version, the company can deploy the new version of the application separately from the production version. Configuring an alternate Amazon Route 53 record for the new ALB endpoint allows the test system requests to be directed to the beta version.
upvoted 4 times
...
eboehm
4 months, 3 weeks ago
Selected Answer: B
I'm going to go with B as well, since updating code is way more labor-intensive than creating a new route entry.
upvoted 4 times
...
yeacuz
5 months, 3 weeks ago
Selected Answer: A
Option A is the least effort. With option B, you have to additionally create a new ALB *and* also a new route 53 record. With option A, you can create a new listener based on HTTP header: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html and it will fulfill the requirements. You will also need a new auto scaling group and target group with option A - but you also need this with option B as well, so option A is the least effort.
upvoted 2 times
...
junrun3
5 months, 3 weeks ago
Selected Answer: B
The question asks which solution meets the requirement with the least amount of effort. The answer is B. A is more labor-intensive to implement because it requires updating both the ALB routing rules and the test system code.
upvoted 2 times
...
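The option-A routing rule discussed above can be sketched as request parameters for the `elbv2` `create_rule` call. Since ALB rules have no dedicated cookie condition type, the match is expressed as an http-header condition on the Cookie header; the ARNs and priority are placeholders:

```python
# Sketch of the option-A listener rule: forward requests whose "version"
# cookie equals "beta" to the beta target group. Wildcards let the cookie
# appear anywhere in the Cookie header.
def build_beta_rule(listener_arn, beta_target_group_arn):
    return {
        "ListenerArn": listener_arn,
        "Priority": 10,
        "Conditions": [
            {
                "Field": "http-header",
                "HttpHeaderConfig": {
                    "HttpHeaderName": "Cookie",
                    "Values": ["*version=beta*"],
                },
            }
        ],
        "Actions": [{"Type": "forward", "TargetGroupArn": beta_target_group_arn}],
    }
```

This payload would be passed as `elbv2_client.create_rule(**build_beta_rule(...))`.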
Question #112 Topic 1

A team is developing an application that is deployed on Amazon EC2 instances. During testing, the team receives an error. The EC2 instances are unable to access an Amazon S3 bucket.

Which steps should the team take to troubleshoot this issue? (Choose two.)

  • A. Check whether the policy that is assigned to the IAM role that is attached to the EC2 instances grants access to Amazon S3.
  • B. Check the S3 bucket policy to validate the access permissions for the S3 bucket.
  • C. Check whether the policy that is assigned to the IAM user that is attached to the EC2 instances grants access to Amazon S3.
  • D. Check the S3 Lifecycle policy to validate the permissions that are assigned to the S3 bucket.
  • E. Check the security groups that are assigned to the EC2 instances. Make sure that a rule is not blocking the access to Amazon S3.

Correct Answer: D E 🗳️

Community vote distribution
AB (89%)
11%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: AB
Option A is correct because IAM roles are used to grant permissions to AWS services, such as EC2 instances, to access other AWS services, such as S3 buckets. The policy assigned to the IAM role attached to the EC2 instances should be checked to ensure that it grants access to the S3 bucket. Option B is also correct because the S3 bucket policy controls access to the S3 bucket. The S3 bucket policy should be checked to ensure that the access permissions are correctly configured.
upvoted 12 times
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: AB
https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket
upvoted 1 times
...
love777
2 months, 2 weeks ago
Selected Answer: AE
Explanation: A. IAM Role Policy: EC2 instances are typically associated with IAM roles. These roles have policies attached to them that define the permissions the instances have. If the instances are unable to access an S3 bucket, it's essential to verify that the IAM role assigned to the EC2 instances has the necessary permissions to interact with S3. E. Security Groups: Security groups act as virtual firewalls for EC2 instances. They control inbound and outbound traffic. If the EC2 instances are unable to access S3, it's possible that the associated security group is blocking outbound traffic to the S3 service. Make sure the security group rules allow outbound traffic to the S3 service.
upvoted 2 times
...
love777
2 months, 2 weeks ago
The correct steps to troubleshoot the issue are: A. Check whether the policy that is assigned to the IAM role that is attached to the EC2 instances grants access to Amazon S3. E. Check the security groups that are assigned to the EC2 instances. Make sure that a rule is not blocking the access to Amazon S3. Explanation: E. Security Groups: Security groups act as virtual firewalls for EC2 instances. They control inbound and outbound traffic. If the EC2 instances are unable to access S3, it's possible that the associated security group is blocking outbound traffic to the S3 service. Make sure the security group rules allow outbound traffic to the S3 service.
upvoted 2 times
...
awsazedevsh
4 months ago
Why not E?
upvoted 2 times
remynick
2 months, 3 weeks ago
access to S3 is controlled by IAM, not security groups.
upvoted 3 times
...
...
indirasubbaraj
4 months, 3 weeks ago
AB https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket
upvoted 1 times
...
Prem28
5 months ago
AE B. Check the S3 bucket policy to validate the access permissions for the S3 bucket. The S3 bucket policy controls who has access to the bucket, but it does not control how they can access it. The IAM role or user that is attached to the EC2 instances must have the appropriate permissions to access the bucket, regardless of what the S3 bucket policy says. C. Check whether the policy that is assigned to the IAM user that is attached to the EC2 instances grants access to Amazon S3. This is unlikely to be the cause of the issue, as the IAM role is what is typically used to control access to AWS resources. D. Check the S3 Lifecycle policy to validate the permissions that are assigned to the S3 bucket. The S3 Lifecycle policy controls how objects are stored and moved in Amazon S3. It does not control who has access to the bucket.
upvoted 1 times
...
vic614
6 months, 1 week ago
Selected Answer: AB
A: Make sure the EC2 instance profile has permission to access S3. B: Make sure the S3 resource policy allows access from the instance.
upvoted 3 times
...
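The identity policy that step A checks for can be sketched as follows; the action list and bucket name are illustrative placeholders for whatever the application actually needs:

```python
# Sketch of the kind of policy attached to the EC2 instance role that option A
# verifies: it must grant the S3 actions the application performs.
def build_instance_role_policy(bucket_name):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }
```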
Question #113 Topic 1

A developer is working on an ecommerce website. The developer wants to review server logs without logging in to each of the application servers individually. The website runs on multiple Amazon EC2 instances, is written in Python, and needs to be highly available.

How can the developer update the application to meet these requirements with MINIMUM changes?

  • A. Rewrite the application to be cloud native and to run on AWS Lambda, where the logs can be reviewed in Amazon CloudWatch.
  • B. Set up centralized logging by using Amazon OpenSearch Service, Logstash, and OpenSearch Dashboards.
  • C. Scale down the application to one larger EC2 instance where only one instance is recording logs.
  • D. Install the unified Amazon CloudWatch agent on the EC2 instances. Configure the agent to push the application logs to CloudWatch.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: D
Option D is the best option because it requires minimum changes and leverages the existing infrastructure.
upvoted 9 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: D
By installing the Amazon CloudWatch agent on the EC2 instances, the developer can easily collect and send logs from each instance to Amazon CloudWatch. The CloudWatch agent provides a unified way to collect logs, system-level metrics, and custom metrics from the EC2 instances.
upvoted 2 times
...
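The option-D agent configuration can be sketched as a JSON document like the one below (shown as a Python dict); the file path and log group name are placeholders, and `{instance_id}` is substituted by the agent itself:

```python
import json

# Sketch of a unified CloudWatch agent configuration that ships an application
# log file from each EC2 instance to a central CloudWatch Logs group.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/app/server.log",
                        "log_group_name": "ecommerce-server-logs",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}

config_text = json.dumps(agent_config, indent=2)
```

Saved to the agent's configuration location on each instance, this lets the developer review all server logs in one CloudWatch Logs group without logging in to individual servers.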
Question #114 Topic 1

A company is creating an application that processes .csv files from Amazon S3. A developer has created an S3 bucket. The developer has also created an AWS Lambda function to process the .csv files from the S3 bucket.

Which combination of steps will invoke the Lambda function when a .csv file is uploaded to Amazon S3? (Choose two.)

  • A. Create an Amazon EventBridge rule. Configure the rule with a pattern to match the S3 object created event.
  • B. Schedule an Amazon EventBridge rule to run a new Lambda function to scan the S3 bucket.
  • C. Add a trigger to the existing Lambda function. Set the trigger type to EventBridge. Select the Amazon EventBridge rule.
  • D. Create a new Lambda function to scan the S3 bucket for recently added S3 objects.
  • E. Add S3 Lifecycle rules to invoke the existing Lambda function.

Correct Answer: BD 🗳️

Community vote distribution
AC (94%)
6%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: AC
Option A is correct because an Amazon EventBridge rule can be created to detect when an object is created in an S3 bucket. The rule should be configured with a pattern to match the S3 object created event. Option C is correct because the existing Lambda function can be updated with an EventBridge trigger. The trigger type should be set to EventBridge, and the Amazon EventBridge rule created in step A should be selected.
upvoted 14 times
...
Nagasoracle
Most Recent 2 weeks, 6 days ago
Selected Answer: AC
AC is the combination of steps required.
upvoted 1 times
...
Jing2023
4 weeks ago
Why not just use the S3 event as the trigger directly?
upvoted 1 times
...
Naj_64
3 months, 3 weeks ago
Selected Answer: AC
A C for sure
upvoted 2 times
...
loctong
5 months, 1 week ago
Selected Answer: AB
A and B are correct.
upvoted 1 times
...
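The option-A event pattern can be sketched as below; the bucket name is a placeholder, and the bucket must have EventBridge notifications turned on for these events to flow:

```python
# Sketch of an EventBridge event pattern matching S3 "Object Created" events
# for .csv keys in one bucket. Suffix filtering restricts the match to .csv
# uploads so the rule only fires for the files the Lambda function processes.
def build_csv_created_pattern(bucket_name):
    return {
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {
            "bucket": {"name": [bucket_name]},
            "object": {"key": [{"suffix": ".csv"}]},
        },
    }
```

This pattern would be supplied to `events_client.put_rule`, with the existing Lambda function added as the rule target (step C).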
Question #115 Topic 1

A developer needs to build an AWS CloudFormation template that self-populates the AWS Region variable that deploys the CloudFormation template.

What is the MOST operationally efficient way to determine the Region in which the template is being deployed?

  • A. Use the AWS::Region pseudo parameter.
  • B. Require the Region as a CloudFormation parameter.
  • C. Find the Region from the AWS::StackId pseudo parameter by using the Fn::Split intrinsic function.
  • D. Dynamically import the Region by referencing the relevant parameter in AWS Systems Manager Parameter Store.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
A. Use the AWS::Region pseudo parameter.
upvoted 9 times
...
Baba_Eni
Most Recent 4 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
upvoted 1 times
...
Baba_Eni
4 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: A
The AWS::Region pseudo parameter is a built-in CloudFormation parameter that automatically resolves to the AWS Region where the CloudFormation stack is being created. By using this pseudo parameter, you can dynamically access the current Region without requiring any additional configuration or input.
upvoted 2 times
...
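A template fragment using the pseudo parameter can be sketched as follows (JSON-form CloudFormation shown as a Python dict); `{"Ref": "AWS::Region"}` resolves to the deployment Region with no parameter input:

```python
# Sketch of a CloudFormation fragment that self-populates the Region (option A).
template_fragment = {
    "Outputs": {
        "DeployedRegion": {
            "Description": "Region this stack was deployed in",
            "Value": {"Ref": "AWS::Region"},
        }
    }
}
```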
Question #116 Topic 1

A company has hundreds of AWS Lambda functions that the company's QA team needs to test by using the Lambda function URLs. A developer needs to configure the authentication of the Lambda functions to allow access so that the QA IAM group can invoke the Lambda functions by using the public URLs.

Which solution will meet these requirements?

  • A. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the AWS_IAM auth type. Run another script to create an IAM identity-based policy that allows the lambda:InvokeFunctionUrl action to all the Lambda function Amazon Resource Names (ARNs). Attach the policy to the QA IAM group.
  • B. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the NONE auth type. Run another script to create an IAM resource-based policy that allows the lambda:InvokeFunctionUrl action to all the Lambda function Amazon Resource Names (ARNs). Attach the policy to the QA IAM group.
  • C. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the AWS_IAM auth type. Run another script to loop on the Lambda functions to create an IAM identity-based policy that allows the lambda:InvokeFunctionUrl action from the QA IAM group's Amazon Resource Name (ARN).
  • D. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the NONE auth type. Run another script to loop on the Lambda functions to create an IAM resource-based policy that allows the lambda:InvokeFunctionUrl action from the QA IAM group's Amazon Resource Name (ARN).

Correct Answer: A 🗳️

Community vote distribution
A (79%)
C (21%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
Option A meets these requirements.
upvoted 11 times
jipark
3 months ago
create 'AWS_IAM auth type' -> Attach the policy to the QA IAM group
upvoted 2 times
...
ppardav
4 months, 2 weeks ago
https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html
upvoted 1 times
...
...
love777
Most Recent 2 months, 2 weeks ago
Selected Answer: C
Explanation: In this scenario, the QA team needs to test AWS Lambda functions using Lambda function URLs while ensuring proper authentication and access control. Here's why option C is the appropriate solution: Authentication Type: Using the AWS_IAM auth type for the Lambda function URLs ensures that the Lambda functions can be invoked only by users and roles that have the necessary IAM permissions. Identity-Based Policy: By creating an IAM identity-based policy, you grant permissions directly to the QA IAM group to invoke the Lambda functions using the Lambda function URLs. This provides fine-grained control over which IAM entities can access the functions. Option A uses the AWS_IAM auth type and creates a policy for the QA IAM group, which is a good direction. However, the creation of a policy that allows lambda:InvokeFunctionUrl for all Lambda function ARNs might grant excessive permissions.
upvoted 3 times
dezoito
3 weeks, 2 days ago
Why A grant excessive permissions? The policy will contain only the Lambda's ARNs wich the QA group should have access to.
upvoted 1 times
...
...
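The two pieces of option A can be sketched as payload builders; function names and ARNs are placeholders. The first builds the request parameters for `lambda_client.create_function_url_config`, the second builds the identity policy attached to the QA group:

```python
# Sketch of option A: a function URL secured with IAM auth, plus an
# identity-based policy granting the QA group permission to invoke the URLs.
def build_url_config(function_name):
    return {"FunctionName": function_name, "AuthType": "AWS_IAM"}

def build_qa_invoke_policy(function_arns):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "lambda:InvokeFunctionUrl",
                "Resource": function_arns,
            }
        ],
    }
```

A CLI or SDK loop over the hundreds of functions would call `build_url_config` per function, then the single policy (listing all function ARNs) would be attached to the QA IAM group once.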
Question #117 Topic 1

A developer maintains a critical business application that uses Amazon DynamoDB as the primary data store. The DynamoDB table contains millions of documents and receives 30-60 requests each minute. The developer needs to perform processing in near-real time on the documents when they are added or updated in the DynamoDB table.

How can the developer implement this feature with the LEAST amount of change to the existing application code?

  • A. Set up a cron job on an Amazon EC2 instance. Run a script every hour to query the table for changes and process the documents.
  • B. Enable a DynamoDB stream on the table. Invoke an AWS Lambda function to process the documents.
  • C. Update the application to send a PutEvents request to Amazon EventBridge. Create an EventBridge rule to invoke an AWS Lambda function to process the documents.
  • D. Update the application to synchronously process the documents directly after the DynamoDB write.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
Option B is the best solution because it proposes enabling a DynamoDB stream on the table, which allows the developer to capture document-level changes in near-real time without modifying the application code. Then, the stream can be configured to invoke an AWS Lambda function to process the documents in near-real time. This solution requires minimal changes to the existing application code, and the Lambda function can be developed and deployed separately, enabling the developer to easily maintain and update it as needed.
upvoted 8 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: B
To implement near-real-time processing on documents added or updated in a DynamoDB table with the least amount of change to the existing application code, the developer should: B. Enable a DynamoDB stream on the table and invoke an AWS Lambda function to process the documents. Enabling a DynamoDB stream on the table allows capturing and processing of the changes made to the table in near-real-time. The stream provides an ordered sequence of item-level modifications (inserts, updates, and deletes) that can be consumed by other AWS services, such as AWS Lambda.
upvoted 4 times
...
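The stream-consuming Lambda function from option B can be sketched as below; `process_document` is a placeholder for the real processing step:

```python
# Sketch of a Lambda handler for a DynamoDB stream (option B): process only
# INSERT and MODIFY records, i.e. documents that were added or updated.
def process_document(image):
    return image  # placeholder for the real near-real-time processing

def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            # NewImage holds the item's attributes in DynamoDB JSON form.
            new_image = record["dynamodb"].get("NewImage", {})
            process_document(new_image)
            processed += 1
    return {"processed": processed}
```

The existing application needs no change at all: DynamoDB delivers the changes to this function through the stream's event source mapping.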
Question #118 Topic 1

A developer is writing an application for a company. The application will be deployed on Amazon EC2 and will use an Amazon RDS for Microsoft SQL Server database. The company's security team requires that database credentials are rotated at least weekly.

How should the developer configure the database credentials for this application?

  • A. Create a database user. Store the user name and password in an AWS Systems Manager Parameter Store secure string parameter. Enable rotation of the AWS Key Management Service (AWS KMS) key that is used to encrypt the parameter.
  • B. Enable IAM authentication for the database. Create a database user for use with IAM authentication. Enable password rotation.
  • C. Create a database user. Store the user name and password in an AWS Secrets Manager secret that has daily rotation enabled.
  • D. Use the EC2 user data to create a database user. Provide the user name and password in environment variables to the application.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
option C: Create a database user. Store the user name and password in an AWS Secrets Manager secret that has daily rotation enabled. This will allow the developer to securely store the database credentials and automatically rotate them at least weekly to meet the company’s security requirements.
upvoted 11 times
...
jipark
Most Recent 3 months ago
Selected Answer: C
Key rotation and cross-account access are features of Secrets Manager: https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/
upvoted 2 times
...
Baba_Eni
4 months, 3 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_schedule.html
upvoted 3 times
...
loctong
5 months, 3 weeks ago
Selected Answer: C
the keyword is "rotation"
upvoted 4 times
...
Question #119 Topic 1

A real-time messaging application uses Amazon API Gateway WebSocket APIs with backend HTTP service. A developer needs to build a feature in the application to identify a client that keeps connecting to and disconnecting from the WebSocket connection. The developer also needs the ability to remove the client.

Which combination of changes should the developer make to the application to meet these requirements? (Choose two.)

  • A. Switch to HTTP APIs in the backend service.
  • B. Switch to REST APIs in the backend service.
  • C. Use the callback URL to disconnect the client from the backend service.
  • D. Add code to track the client status in Amazon ElastiCache in the backend service.
  • E. Implement $connect and $disconnect routes in the backend service.

Correct Answer: CD 🗳️

Community vote distribution
CE (56%)
DE (44%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: DE
Option D because by storing the client status in the cache, the backend service can quickly access the client status data without the need to query the database or perform other time-consuming operations. Option E. Implement $connect and $disconnect routes in the backend service: $connect and $disconnect are the reserved routes in WebSocket APIs, which are automatically called by API Gateway whenever a client connects or disconnects from the WebSocket. By implementing these routes in the backend service, the developer can track and manage the client status, including identifying and removing the client when needed.
upvoted 13 times
...
catcatpunch
Highly Voted 5 months, 1 week ago
Selected Answer: CE
C => https://docs.aws.amazon.com/ko_kr/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html E => https://docs.aws.amazon.com/ko_kr/apigateway/latest/developerguide/apigateway-websocket-api-route-keys-connect-disconnect.html
upvoted 7 times
...
Balliache520505
Most Recent 1 month, 3 weeks ago
Selected Answer: CE
I go with C and E. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-route-keys-connect-disconnect.html https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html
upvoted 1 times
...
love777
2 months, 2 weeks ago
Selected Answer: DE
D. Tracking Client Status: To identify and manage clients that connect and disconnect from the WebSocket connection, you need a way to persist this information. Amazon ElastiCache is a managed in-memory caching service that can be used to store this kind of data. By adding code to your backend service to track client status in ElastiCache, you can keep a record of client connections and disconnections. E. $connect and $disconnect Routes: In API Gateway WebSocket APIs, the $connect and $disconnect routes are special routes that are automatically triggered when a client connects and disconnects from the WebSocket connection. By implementing these routes in your backend service, you can capture the client information and update the client status in ElastiCache, thus achieving the requirement of identifying clients and managing their connections.
upvoted 3 times
...
Phongsanth
4 months, 1 week ago
Selected Answer: CE
Option C and E is my preferable choice. why do we have to use option D in case we apply $connect and $disconnect already in option E ? https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-route-keys-connect-disconnect.html
upvoted 4 times
...
delak
5 months, 2 weeks ago
Selected Answer: CE
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html
upvoted 4 times
...
loctong
5 months, 3 weeks ago
Selected Answer: CE
Implementing a callback URL allows the backend service to initiate disconnection from the WebSocket connection.
upvoted 4 times
...
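The ideas in both candidate answers can be sketched together: a small per-client counter (a stand-in for the ElastiCache tracking in option D, fed from the $connect/$disconnect routes of option E) and the request parameters for removing a connection through the @connections callback API (option C). The threshold and IDs are placeholders:

```python
# Sketch of flap detection plus forced disconnect for a WebSocket API.
class ConnectionTracker:
    def __init__(self, flap_threshold=5):
        self.counts = {}
        self.flap_threshold = flap_threshold

    def record(self, client_id, event):
        # Called from the $connect and $disconnect route handlers.
        if event in ("connect", "disconnect"):
            self.counts[client_id] = self.counts.get(client_id, 0) + 1

    def is_flapping(self, client_id):
        return self.counts.get(client_id, 0) >= self.flap_threshold

def build_disconnect_call(connection_id):
    # Request parameters for apigatewaymanagementapi delete_connection,
    # which closes the client's WebSocket via the callback URL.
    return {"ConnectionId": connection_id}
```

When `is_flapping` returns True for a client, the backend would call `delete_connection(**build_disconnect_call(connection_id))` against the API's @connections endpoint to remove it.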
Question #120 Topic 1

A developer has written code for an application and wants to share it with other developers on the team to receive feedback. The shared application code needs to be stored long-term with multiple versions and batch change tracking.

Which AWS service should the developer use?

  • A. AWS CodeBuild
  • B. Amazon S3
  • C. AWS CodeCommit
  • D. AWS Cloud9

Correct Answer: C 🗳️

Community vote distribution
C (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
option C, AWS CodeCommit.
upvoted 5 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: C
must be C
upvoted 3 times
...
delak
5 months, 3 weeks ago
it's C
upvoted 2 times
...
Question #121 Topic 1

A company's developer is building a static website to be deployed in Amazon S3 for a production environment. The website integrates with an Amazon Aurora PostgreSQL database by using an AWS Lambda function. The website that is deployed to production will use a Lambda alias that points to a specific version of the Lambda function.

The company must rotate the database credentials every 2 weeks. Lambda functions that the company deployed previously must be able to use the most recent credentials.

Which solution will meet these requirements?

  • A. Store the database credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Secrets Manager.
  • B. Include the database credentials as part of the Lambda function code. Update the credentials periodically and deploy the new Lambda function.
  • C. Use Lambda environment variables. Update the environment variables when new credentials are available.
  • D. Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code in the Lambda function to retrieve the credentials from Systems Manager Parameter Store.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
Option A is the correct solution; Option D is also a valid solution, but it is not the best option since Secrets Manager provides built-in rotation, which ensures that the latest credentials are automatically updated. Additionally, AWS Systems Manager Parameter Store does not provide the ability to rotate secrets automatically.
upvoted 9 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: A
the keyword is "rotation"
upvoted 4 times
...
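The option-A retrieval pattern can be sketched as a small cache wrapper: the function fetches the secret at invoke time (so previously deployed versions pick up rotated credentials without a redeploy), with a short TTL to avoid calling Secrets Manager on every request. The fetcher is injected here to keep the sketch self-contained; in Lambda it would wrap `secretsmanager_client.get_secret_value`:

```python
import json
import time

# Sketch of fetching rotated credentials at invoke time with a short TTL cache.
class CachedSecret:
    def __init__(self, fetch_secret, ttl_seconds=300):
        self.fetch_secret = fetch_secret  # returns the SecretString JSON
        self.ttl_seconds = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._fetched_at > self.ttl_seconds:
            self._value = json.loads(self.fetch_secret())
            self._fetched_at = now
        return self._value

def _demo_fetch():
    # Placeholder standing in for get_secret_value()["SecretString"].
    return '{"username": "admin", "password": "rotated-example"}'

creds = CachedSecret(_demo_fetch).get()
```

Because the secret is read at run time rather than baked into a deployment, the aliased Lambda versions always see the most recent rotation.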
Question #122 Topic 1

A developer is developing an application that uses signed requests (Signature Version 4) to call other AWS services. The developer has created a canonical request, has created the string to sign, and has calculated signing information.

Which methods could the developer use to complete a signed request? (Choose two.)

  • A. Add the signature to an HTTP header that is named Authorization.
  • B. Add the signature to a session cookie.
  • C. Add the signature to an HTTP header that is named Authentication.
  • D. Add the signature to a query string parameter that is named X-Amz-Signature.
  • E. Add the signature to an HTTP header that is named WWW-Authenticate.

Correct Answer: AD 🗳️

Community vote distribution
AD (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: AD
the correct options are A and D.
upvoted 7 times
...
vicvega
Most Recent 4 months ago
Header: Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20220830/us-east-1/ec2/aws4_request, SignedHeaders=host;x-amz-date, Signature=calculated-signature Query String: https://ec2.amazonaws.com/? Action=DescribeInstances& Version=2016-11-15& X-Amz-Signature=calculated-signature https://docs.aws.amazon.com/IAM/latest/UserGuide/create-signed-request.html
upvoted 3 times
...
loctong
5 months, 3 weeks ago
Selected Answer: AD
Options B, C, and E are not correct.
upvoted 1 times
...
awsdummie
6 months ago
Selected Answer: AD
https://docs.aws.amazon.com/IAM/latest/UserGuide/create-signed-request.html
upvoted 2 times
...
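The final SigV4 step referenced in options A and D can be sketched as below: derive the signing key, sign the string to sign, and place the result in the Authorization header (the same hex signature would go in the `X-Amz-Signature` query parameter for presigned requests). The credential values and string to sign are placeholders:

```python
import hashlib
import hmac

# Sketch of SigV4 key derivation and Authorization header assembly.
def derive_signing_key(secret_key, date_stamp, region, service):
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def build_authorization_header(access_key, date_stamp, region, service,
                               signed_headers, signature):
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    return (
        f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
        f"SignedHeaders={signed_headers}, Signature={signature}"
    )

# Placeholder inputs for illustration only.
key = derive_signing_key("examplesecret", "20240101", "us-east-1", "s3")
signature = hmac.new(key, b"example-string-to-sign", hashlib.sha256).hexdigest()
header = build_authorization_header("AKIDEXAMPLE", "20240101", "us-east-1",
                                    "s3", "host;x-amz-date", signature)
```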
Question #123 Topic 1

A company must deploy all its Amazon RDS DB instances by using AWS CloudFormation templates as part of AWS CodePipeline continuous integration and continuous delivery (CI/CD) automation. The primary password for the DB instance must be automatically generated as part of the deployment process.

Which solution will meet these requirements with the LEAST development effort?

  • A. Create an AWS Lambda-backed CloudFormation custom resource. Write Lambda code that generates a secure string. Return the value of the secure string as a data field of the custom resource response object. Use the CloudFormation Fn::GetAtt intrinsic function to get the value of the secure string. Use the value to create the DB instance.
  • B. Use the AWS CodeBuild action of CodePipeline to generate a secure string by using the following AWS CLI command: aws secretsmanager get-random-password. Pass the generated secure string as a CloudFormation parameter with the NoEcho attribute set to true. Use the parameter reference to create the DB instance.
  • C. Create an AWS Lambda-backed CloudFormation custom resource. Write Lambda code that generates a secure string. Return the value of the secure string as a data field of the custom resource response object. Use the CloudFormation Fn::GetAtt intrinsic function to get a value of the secure string. Create secrets in AWS Secrets Manager. Use the secretsmanager dynamic reference to use the value stored in the secret to create the DB instance.
  • D. Use the AWS::SecretsManager::Secret resource to generate a secure string. Store the secure string as a secret in AWS Secrets Manager. Use the secretsmanager dynamic reference to use the value stored in the secret to create the DB instance.
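For reference, option D maps to a short template fragment like the following config sketch (resource names and properties are illustrative, following the pattern in the Secrets Manager user guide):

```yaml
Resources:
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 16
        ExcludeCharacters: '"@/\'
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
```

Generation and retrieval of the password both stay inside the template, with no Lambda code or pipeline step required.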

Correct Answer: B 🗳️

Community vote distribution
D (77%)
B (23%)

dezoito
3 weeks, 2 days ago
Selected Answer: D
With AWS CloudFormation, you can retrieve a secret to use in another AWS CloudFormation resource. A common scenario is to first create a secret with a password generated by Secrets Manager, and then retrieve the username and password from the secret to use as credentials for a new database. https://docs.aws.amazon.com/secretsmanager/latest/userguide/cfn-example_reference-secret.html
upvoted 2 times
...
love777
2 months, 2 weeks ago
Selected Answer: B
Option B provides a straightforward approach to generating a secure string for the DB instance password and using it in CloudFormation with minimal development effort. Here's why this option is efficient: CodeBuild Action: Using the AWS CodeBuild action within CodePipeline to generate a secure string using the aws secretsmanager get-random-password command allows you to easily create a random password without writing custom Lambda code. CloudFormation Parameter: You can pass the generated secure string as a CloudFormation parameter with the NoEcho attribute set to true. This ensures that the parameter value won't be exposed in CloudFormation outputs or logs.
upvoted 3 times
...
FunkyFresco
5 months, 1 week ago
Selected Answer: D
The correct option is D. Create the password from secrets manager.
upvoted 4 times
...
delak
5 months, 2 weeks ago
Selected Answer: D
yes it's D
upvoted 2 times
...
rlnd2000
5 months, 3 weeks ago
Selected Answer: D
The answer is D. This is the secretsmanager dynamic reference pattern in CloudFormation.
upvoted 2 times
...
chumji
5 months, 4 weeks ago
I think answer is D https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-rds-integration-aws-secrets-manager/
upvoted 2 times
...
MrTee
6 months, 2 weeks ago
It's a difficult choice between B and D. Option B leverages the existing AWS CLI command to generate a secure string and then passes it as a parameter to CloudFormation, where it can be used to create the DB instance. But if the use of Secrets Manager is already part of the organization's infrastructure, and the setup has already been completed, then option D may indeed be the simplest solution.
upvoted 3 times
...
Question #124 Topic 1

An organization is storing large files in Amazon S3, and is writing a web application to display metadata about the files to end users. Based on the metadata, a user selects an object to download. The organization needs a mechanism to index the files and provide single-digit millisecond latency retrieval for the metadata.

What AWS service should be used to accomplish this?

  • A. Amazon DynamoDB
  • B. Amazon EC2
  • C. AWS Lambda
  • D. Amazon RDS
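As a sketch of why A fits: a DynamoDB table keyed on the object key serves metadata lookups in single-digit milliseconds. The table and attribute names below are hypothetical; the dict mirrors what boto3's low-level `get_item` call would send:

```python
def build_metadata_get(table: str, object_key: str) -> dict:
    """Build a DynamoDB GetItem request for a file-metadata table keyed on objectKey."""
    return {
        "TableName": table,
        "Key": {"objectKey": {"S": object_key}},  # partition key lookup: O(1), low latency
        "ProjectionExpression": "objectKey, sizeBytes, contentType",
    }


# Hypothetical table and S3 key; with boto3 this would be dynamodb.get_item(**request).
request = build_metadata_get("FileMetadata", "reports/2023/q1.pdf")
```

The web application writes one metadata item per S3 object and reads it back by key, avoiding any scan of the bucket itself.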

Correct Answer: A 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
In this scenario, the metadata about the files can be stored in a DynamoDB table with a primary key based on the metadata attributes. This would enable the organization to quickly query and retrieve metadata about the files in real-time, with single-digit millisecond latency.
upvoted 8 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: A
Amazon DynamoDB is a highly scalable and fully managed NoSQL database service that can provide fast and consistent performance at any scale. It is a suitable choice for indexing and storing metadata associated with files.
upvoted 3 times
...
Question #125 Topic 1

A developer is creating an AWS Serverless Application Model (AWS SAM) template. The AWS SAM template contains the definition of multiple AWS Lambda functions, an Amazon S3 bucket, and an Amazon CloudFront distribution. One of the Lambda functions runs on Lambda@Edge in the CloudFront distribution. The S3 bucket is configured as an origin for the CloudFront distribution.

When the developer deploys the AWS SAM template in the eu-west-1 Region, the creation of the stack fails.

Which of the following could be the reason for this issue?

  • A. CloudFront distributions can be created only in the us-east-1 Region.
  • B. Lambda@Edge functions can be created only in the us-east-1 Region.
  • C. A single AWS SAM template cannot contain multiple Lambda functions.
  • D. The CloudFront distribution and the S3 bucket cannot be created in the same Region.

Correct Answer: C 🗳️

Community vote distribution
B (94%)
6%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: B
it must be deployed to a region where Lambda@Edge is supported, such as us-east-1.
upvoted 10 times
...
zodraz
Highly Voted 6 months ago
Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-restrictions.html The Lambda function must be in the US East (N. Virginia) Region.
upvoted 6 times
...
tinyflame
Most Recent 3 months ago
Selected Answer: B
A SAM template can be deployed to only one Region, and Lambda@Edge functions can be created only in the us-east-1 Region. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-how-it-works-tutorial.html
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: C
Option A states that CloudFront distributions can only be created in the us-east-1 Region. This statement is incorrect because CloudFront distributions can be created in various AWS regions, including the eu-west-1 Region.
upvoted 1 times
...
Question #126 Topic 1

A developer is integrating Amazon ElastiCache in an application. The cache will store data from a database. The cached data must populate real-time dashboards.

Which caching strategy will meet these requirements?

  • A. A read-through cache
  • B. A write-behind cache
  • C. A lazy-loading cache
  • D. A write-through cache
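A write-through cache can be sketched in a few lines; here the in-memory dicts stand in for ElastiCache and the backing database:

```python
class WriteThroughCache:
    """Write-through: every write goes to the cache and the backing store together,
    so dashboard reads from the cache always see the latest data."""

    def __init__(self):
        self.cache = {}     # stands in for ElastiCache
        self.database = {}  # stands in for the backing database

    def write(self, key, value):
        self.database[key] = value  # write to the store...
        self.cache[key] = value     # ...and to the cache in the same operation

    def read(self, key):
        return self.cache.get(key)  # dashboard reads are served from the cache


store = WriteThroughCache()
store.write("orders_today", 42)
```

A lazy-loading cache, by contrast, would populate the cache only on a read miss, so a real-time dashboard could briefly see stale or missing values after a database write.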

Correct Answer: D 🗳️

Community vote distribution
D (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: D
The best caching strategy for populating real-time dashboards using Amazon ElastiCache would be a write-through caching strategy. In this strategy, when new data is written to the database, it is also written to the cache. This ensures that the most current data is always available in the cache for the real-time dashboards to access, reducing the latency of the data retrieval. Additionally, using a write-through cache ensures that data consistency is maintained between the database and the cache, as any changes to the data are written to both locations simultaneously.
upvoted 10 times
...
Prem28
Most Recent 5 months ago
Answer: A. Option D, a write-through cache, is incorrect because it would not meet the requirement of populating real-time dashboards. A write-through cache writes data to the cache and the database at the same time. This means that the data in the cache would always be up-to-date, but it would also mean that the cache would always be lagging behind the database. This would cause a delay in populating real-time dashboards.
upvoted 1 times
...
loctong
5 months, 3 weeks ago
Selected Answer: D
A write-through cache strategy involves writing data to both the cache and the underlying database simultaneously. When data is updated or inserted into the database, it is also stored or updated in the cache to ensure that the cache remains up-to-date with the latest data.
upvoted 2 times
...
Question #127 Topic 1

A developer is creating an AWS Lambda function. The Lambda function needs an external library to connect to a third-party solution. The external library is a collection of files with a total size of 100 MB. The developer needs to make the external library available to the Lambda execution environment and reduce the Lambda package space.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a Lambda layer to store the external library. Configure the Lambda function to use the layer.
  • B. Create an Amazon S3 bucket. Upload the external library into the S3 bucket. Mount the S3 bucket folder in the Lambda function. Import the library by using the proper folder in the mount point.
  • C. Load the external library to the Lambda function's /tmp directory during deployment of the Lambda package. Import the library from the /tmp directory.
  • D. Create an Amazon Elastic File System (Amazon EFS) volume. Upload the external library to the EFS volume. Mount the EFS volume in the Lambda function. Import the library by using the proper folder in the mount point.
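As a sketch of the layer mechanics behind option A: a Python layer zip must place packages under a `python/` prefix, which Lambda extracts into `/opt` and puts on `sys.path`. The module name below is hypothetical:

```python
import io
import zipfile


def build_layer_zip(module_name: str, source: str) -> bytes:
    """Package a module under the python/ prefix expected by a Python Lambda layer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Lambda extracts layer contents into /opt; /opt/python lands on sys.path.
        zf.writestr(f"python/{module_name}/__init__.py", source)
    return buf.getvalue()


# Hypothetical library; in practice this zip would hold the 100 MB of vendor files.
layer_bytes = build_layer_zip("thirdparty", "VERSION = '1.0'\n")
# Published with: aws lambda publish-layer-version --layer-name thirdparty --zip-file fileb://layer.zip
```

The function package then ships only application code and imports the library as if it were installed locally.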

Correct Answer: C 🗳️

Community vote distribution
A (100%)

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
Create a Lambda layer to store the external library. Configure the Lambda function to use the layer. This will allow the developer to make the external library available to the Lambda execution environment without having to include it in the Lambda package, which will reduce the Lambda package space. Using a Lambda layer is a simple and straightforward solution that requires minimal operational overhead.
upvoted 10 times
...
loctong
Most Recent 5 months, 3 weeks ago
Selected Answer: A
By creating a Lambda layer, you can separate the external library from the Lambda function code itself and make it available to multiple functions. This approach offers the following benefits:
upvoted 2 times
...
dan80
6 months, 1 week ago
Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
upvoted 3 times
...
Question #128 Topic 1

A company has a front-end application that runs on four Amazon EC2 instances behind an Elastic Load Balancer (ELB) in a production environment that is provisioned by AWS Elastic Beanstalk. A developer needs to deploy and test new application code while updating the Elastic Beanstalk platform from the current version to a newer version of Node.js. The solution must result in zero downtime for the application.

Which solution meets these requirements?

  • A. Clone the production environment to a different platform version. Deploy the new application code, and test it. Swap the environment URLs upon verification.
  • B. Deploy the new application code in an all-at-once deployment to the existing EC2 instances. Test the code. Redeploy the previous code if verification fails.
  • C. Perform an immutable update to deploy the new application code to new EC2 instances. Serve traffic to the new instances after they pass health checks.
  • D. Use a rolling deployment for the new application code. Apply the code to a subset of EC2 instances until the tests pass. Redeploy the previous code if the tests fail.

Correct Answer: D 🗳️

Community vote distribution
C (51%)
A (34%)
14%

MrTee
Highly Voted 6 months, 2 weeks ago
Selected Answer: C
Option C is the correct solution that meets the requirements. Performing an immutable update to deploy the new application code to new EC2 instances and serving traffic to the new instances after they pass health checks will ensure zero downtime for the application. Option A would work but cloning the production environment to a different platform version will result in a longer deployment time and can impact the cost of the environment.
upvoted 12 times
yeacuz
5 months, 3 weeks ago
I would agree that option A can affect the cost, but cost is not the issue. The question is asking for zero downtime. I believe the answer is option A
upvoted 1 times
...
awsdummie
6 months ago
C is incorrect; after the instances pass health checks, Elastic Beanstalk transfers them to the original Auto Scaling group. No testing or platform update is done.
upvoted 4 times
...
...
Rameez1
Most Recent 3 weeks, 2 days ago
Selected Answer: C
Both A and C work for the given scenario, but C does it more feasibly for Elastic Beanstalk with zero downtime.
upvoted 1 times
...
stilloneway
2 months, 1 week ago
Selected Answer: C
Key terminology in question is "Test". So it should be immutable for quick rollback in case of test not working.
upvoted 2 times
...
love777
2 months, 2 weeks ago
Selected Answer: C
Explanation: Immutable Update with Elastic Beanstalk: With an immutable update, Elastic Beanstalk provisions new instances with the updated code while keeping the existing instances running. The traffic is shifted gradually to the new instances after they pass health checks, ensuring that there is no downtime during the deployment. If any issue arises during the deployment, traffic is still being served by the existing instances.
upvoted 3 times
...
Naj_64
2 months, 2 weeks ago
Selected Answer: D
Screenshot of Step 4 of Method 1 in the link: https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/using-features.platform.upgrade.html#using-features.platform.upgrade.config "...your application is unavailable during the update. To keep at least one instance in service during the update, enable rolling updates"
upvoted 1 times
Naj_64
2 months, 2 weeks ago
I take this back. I'm going with A "However, you can avoid this downtime by deploying the new version to a separate environment. The existing environment’s configuration is copied and used to launch the green environment with the new version of the application. The new green environment will have its own URL. When it’s time to promote the green environment to serve production traffic, you can use Elastic Beanstalk's Swap Environment URLs feature." https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/swap-the-environment-of-an-elastic-beanstalk-application.html
upvoted 1 times
...
...
MG1407
2 months, 3 weeks ago
Selected Answer: A
A is the answer. Sorry about the double post ... https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/using-features.platform.upgrade.html#using-features.platform.upgrade.config
upvoted 2 times
...
MG1407
2 months, 3 weeks ago
Selected Answer: D
Can't be clearer than this ... https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/using-features.platform.upgrade.html#using-features.platform.upgrade.config
upvoted 1 times
...
redfivedog
3 months, 1 week ago
Selected Answer: A
A is the correct solution here. From https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html, "A blue/green deployment is also required if you want to update an environment to an incompatible platform version.". An immutable deployment would ensure zero downtime, but the new instances launched would have the same platform version as before.
upvoted 1 times
...
bobo777
3 months, 2 weeks ago
Selected Answer: A
A developer also needs to update to a new platform version, and it's most likely a new major version of Node.js. To update to a new major version there is only one method: a blue/green deployment, by creating (cloning) a new environment with the latest platform version. Then deploy the new app version to it, test it, and swap the environment URLs without downtime.
upvoted 2 times
...
Phongsanth
4 months, 1 week ago
Selected Answer: D
On the step 4 of Method 1 in the link. you will see it clearly that rolling update is perfect fit with this question. Of course with zero downtime. https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/using-features.platform.upgrade.html#using-features.platform.upgrade.config
upvoted 2 times
Naj_64
2 months, 2 weeks ago
+1 "...your application is unavailable during the update. To keep at least one instance in service during the update, enable rolling updates"
upvoted 1 times
...
...
gagol14
4 months, 2 weeks ago
Selected Answer: A
Not C: While an immutable update can ensure zero downtime during the deployment process, it doesn't account for updating the Elastic Beanstalk platform version.
upvoted 4 times
...
yeacuz
5 months, 3 weeks ago
Selected Answer: A
Option A is referring to Blue/Green deployments and will fulfill the requirements of the question (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html)
upvoted 3 times
...
loctong
5 months, 3 weeks ago
Selected Answer: D
Performing an immutable update involves creating new EC2 instances with the updated code and the newer version of Node.js, and then swapping the traffic to the new instances once they pass health checks. This approach ensures zero downtime as the existing instances continue to serve traffic until the new instances are ready.
upvoted 1 times
...
awsdummie
6 months ago
Option A
upvoted 4 times
...
MrTee
6 months, 2 weeks ago
Options B and D both involve deploying the new application code to the existing EC2 instances, which can result in downtime if the deployment fails. Redeploying the previous code after a failed deployment can also result in downtime.
upvoted 2 times
qwan
4 months ago
Option D states " Apply the code to a subset of EC2 instances until the tests pass". Subset, not all EC2 instances. So, if deployment fails, you still have some EC2 instances running the old application code. So, no downtime.
upvoted 1 times
...
...
Question #129 Topic 1

A developer is creating an AWS Lambda function. The Lambda function will consume messages from an Amazon Simple Queue Service (Amazon SQS) queue. The developer wants to integrate unit testing as part of the function's continuous integration and continuous delivery (CI/CD) process.

How can the developer unit test the function?

  • A. Create an AWS CloudFormation template that creates an SQS queue and deploys the Lambda function. Create a stack from the template during the CI/CD process. Invoke the deployed function. Verify the output.
  • B. Create an SQS event for tests. Use a test that consumes messages from the SQS queue during the function's CI/CD process.
  • C. Create an SQS queue for tests. Use this SQS queue in the application's unit test. Run the unit tests during the CI/CD process.
  • D. Use the aws lambda invoke command with a test event during the CI/CD process.
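Whichever option is chosen, the core of a Lambda unit test is feeding the handler a fabricated SQS-shaped event instead of a live queue. The handler logic and message fields below are hypothetical:

```python
import json


def handler(event, context):
    """Hypothetical Lambda handler: sum the 'amount' field of each SQS message."""
    total = 0
    for record in event["Records"]:
        body = json.loads(record["body"])
        total += body["amount"]
    return {"processed": len(event["Records"]), "total": total}


# A fabricated event with the same shape Lambda receives from a real SQS queue,
# so the handler's logic can be tested with no AWS resources at all.
fake_event = {
    "Records": [
        {"messageId": "1", "body": json.dumps({"amount": 10})},
        {"messageId": "2", "body": json.dumps({"amount": 5})},
    ]
}
result = handler(fake_event, None)
```

Because the event is constructed locally, this test runs in any CI/CD stage without provisioning a queue, which is what distinguishes it from an integration test.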

Correct Answer: D 🗳️

Community vote distribution
C (38%)
B (33%)
D (29%)

gagol14
Highly Voted 4 months, 2 weeks ago
Selected Answer: C
Unit testing is a type of testing that verifies the correctness of individual units of source code, typically functions or methods. When unit testing a Lambda function that interacts with Amazon SQS, you can create a separate test SQS queue that the Lambda function interacts with during testing. You would then validate the behavior of the function based on its interactions with the test queue. This approach isolates the function's behavior from the rest of the system, which is a key principle of unit testing. Option A is incorrect because AWS CloudFormation is typically used for infrastructure deployment, not for unit testing. Option B is incorrect because it does not actually test the function; it only creates an event. Option D is incorrect because the 'aws lambda invoke' command is used to manually trigger a Lambda function, but doesn't necessarily facilitate testing the function's behavior when consuming messages from an SQS queue.
upvoted 7 times
...
redfivedog
Highly Voted 3 months, 1 week ago
Selected Answer: D
D is correct here. Both B and C are integration tests as they are using an actual SQS queue in the tests and not mocking it out.
upvoted 5 times
...
dilleman
Most Recent 3 weeks, 4 days ago
Selected Answer: D
Option D is the only true unit test.
upvoted 1 times
...
love777
2 months, 2 weeks ago
Selected Answer: B
Explanation: Option B involves simulating the SQS event trigger for testing purposes. This is a common practice in AWS Lambda unit testing. Here's how it works: SQS Event for Tests: In your unit test code, you can create an SQS event object that simulates the event structure that Lambda receives when an SQS message is consumed. This event object will contain the necessary information, such as the message content, message attributes, etc. Testing Logic: You can then pass this event object to your Lambda function's handler function as if it were an actual SQS event. This allows you to test your Lambda function's logic as it would work in response to an SQS message. Mocking Dependencies: During unit testing, you might want to mock any AWS service calls, such as SQS, to isolate your Lambda function's logic from external services.
upvoted 4 times
...
r3mo
3 months, 2 weeks ago
Option B offers a practical and efficient way to unit test an AWS Lambda function consuming messages from an SQS queue. It provides an accurate representation of the actual event source, simplifies the testing process, integrates well with CI/CD pipelines, isolates production resources, and is cost-effective.
upvoted 2 times
...
nguyenta
3 months, 3 weeks ago
Selected Answer: D
D, from Google Bard
upvoted 1 times
...
vicvega
4 months ago
The idea of creating permanent, persistent AWS resources for a test that might take 3 seconds is an anti-pattern. During a CI/CD pipeline, resources should be spun up, used, and then torn down. Nothing should hang around after a CI/CD pipeline runs. Does that not negate B and C?
upvoted 3 times
...
Phongsanth
4 months, 1 week ago
Selected Answer: C
I vote C. Unit test should be isolated. Check out in this link. https://aws.amazon.com/blogs/devops/unit-testing-aws-lambda-with-python-and-mock-aws-services/
upvoted 2 times
...
hexie
4 months, 1 week ago
Selected Answer: B
B. And before explaining it, I would like to ask you guys to use ChatGPT if you want, but don't take it as a source of truth or paste its answers here, where people usually come to read USEFUL stuff and understand correctly what it's all about. Moderators should review those votes before approving them lol. Option B is ONE approach for unit testing AWS Lambda functions, since it involves creating a mock SQS event and passing it to the function to be tested. This allows the function's behavior to be tested in isolation, which is the aim of unit testing. :) Option C is more like an integration test, not a unit test. That's all. :)
upvoted 4 times
...
patrick889
4 months, 3 weeks ago
chatGPT said C is correct
upvoted 2 times
...
Question #130 Topic 1

A developer is working on a web application that uses Amazon DynamoDB as its data store. The application has two DynamoDB tables: one table that is named artists and one table that is named songs. The artists table has artistName as the partition key. The songs table has songName as the partition key and artistName as the sort key.

The table usage patterns include the retrieval of multiple songs and artists in a single database operation from the webpage. The developer needs a way to retrieve this information with minimal network traffic and optimal application performance.

Which solution will meet these requirements?

  • A. Perform a BatchGetItem operation that returns items from the two tables. Use the list of songName/artistName keys for the songs table and the list of artistName keys for the artists table.
  • B. Create a local secondary index (LSI) on the songs table that uses artistName as the partition key. Perform a query operation for each artistName on the songs table that filters by the list of songName. Perform a query operation for each artistName on the artists table.
  • C. Perform a BatchGetItem operation on the songs table that uses the songName/artistName keys. Perform a BatchGetItem operation on the artists table that uses artistName as the key.
  • D. Perform a Scan operation on each table that filters by the list of songName/artistName for the songs table and the list of artistName in the artists table.
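The single BatchGetItem request in option A can be sketched as a plain request body spanning both tables; attribute values use the low-level DynamoDB JSON format, and the sample song/artist values are illustrative:

```python
def build_batch_get(song_keys, artist_names):
    """Build one BatchGetItem request covering both the songs and artists tables."""
    return {
        "RequestItems": {
            # songs table: composite key (songName partition key, artistName sort key)
            "songs": {
                "Keys": [
                    {"songName": {"S": song}, "artistName": {"S": artist}}
                    for song, artist in song_keys
                ]
            },
            # artists table: simple key (artistName partition key)
            "artists": {
                "Keys": [{"artistName": {"S": artist}} for artist in artist_names]
            },
        }
    }


# One round trip: with boto3 this would be dynamodb.batch_get_item(**request),
# returning items from both tables in a single network call.
request = build_batch_get([("Imagine", "John Lennon")], ["John Lennon"])
```

This is why A beats C for network traffic: C issues two BatchGetItem calls where one suffices.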

Correct Answer: A 🗳️

Community vote distribution
A (91%)
9%

csG13
Highly Voted 4 months, 4 weeks ago
Selected Answer: A
The correct answer is A. BatchGetItem can return one or multiple items from one or more tables. For reference check the link below https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
upvoted 6 times
...
norris81
Most Recent 1 month, 1 week ago
Selected Answer: A
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
upvoted 1 times
...
rlnd2000
3 months, 2 weeks ago
Selected Answer: B
Agree 100% with Caiyi.
upvoted 1 times
...
caiyi
4 months ago
B. By creating a local secondary index (LSI) on the songs table with artistName as the partition key, you can efficiently query the songs table for each artistName in the list of artists. This approach allows you to retrieve the desired songs for multiple artists with minimal network traffic.
upvoted 3 times
GripZA
2 months, 2 weeks ago
You can't create a LSI on an existing DDB table
upvoted 4 times
...
remynick
2 months, 3 weeks ago
I don't agree; we would need to create a global secondary index to use artistName as the partition key.
upvoted 2 times
...
...
Baba_Eni
4 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
upvoted 3 times
...
Question #131 Topic 1

A company is developing an ecommerce application that uses Amazon API Gateway APIs. The application uses AWS Lambda as a backend. The company needs to test the code in a dedicated, monitored test environment before the company releases the code to the production environment.

Which solution will meet these requirements?

  • A. Use a single stage in API Gateway. Create a Lambda function for each environment. Configure API clients to send a query parameter that indicates the environment and the specific Lambda function.
  • B. Use multiple stages in API Gateway. Create a single Lambda function for all environments. Add different code blocks for different environments in the Lambda function based on Lambda environment variables.
  • C. Use multiple stages in API Gateway. Create a Lambda function for each environment. Configure API Gateway stage variables to route traffic to the Lambda function in different environments.
  • D. Use a single stage in API Gateway. Configure API clients to send a query parameter that indicates the environment. Add different code blocks for different environments in the Lambda function to match the value of the query parameter.
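The stage-variable routing in option C works by letting each stage substitute its own value into the integration's Lambda ARN. A rough local model of that substitution (function names and account details are hypothetical):

```python
def resolve_stage_variables(template: str, stage_variables: dict) -> str:
    """Mimic API Gateway's substitution of ${stageVariables.name} placeholders."""
    for name, value in stage_variables.items():
        template = template.replace("${stageVariables.%s}" % name, value)
    return template


# One integration URI template; each stage supplies its own lambdaFunction variable.
uri = "arn:aws:lambda:eu-west-1:123456789012:function:${stageVariables.lambdaFunction}"
test_arn = resolve_stage_variables(uri, {"lambdaFunction": "checkout-test"})
prod_arn = resolve_stage_variables(uri, {"lambdaFunction": "checkout-prod"})
```

The API definition stays identical across stages; only the stage variable changes, so the test stage can be monitored independently of production.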

Correct Answer: C 🗳️

Community vote distribution
C (100%)

csG13
Highly Voted 4 months, 4 weeks ago
Selected Answer: C
The answer is C. We should create multiple stages and different Lambda functions that are selected based on API Gateway stage variables. https://docs.aws.amazon.com/apigateway/latest/developerguide/amazon-api-gateway-using-stage-variables.html
upvoted 9 times
...
Question #132 Topic 1

A developer creates an AWS Lambda function that retrieves and groups data from several public API endpoints. The Lambda function has been updated and configured to connect to the private subnet of a VPC. An internet gateway is attached to the VPC. The VPC uses the default network ACL and security group configurations.

The developer finds that the Lambda function can no longer access the public API. The developer has ensured that the public API is accessible, but the Lambda function cannot connect to the API.

How should the developer fix the connection issue?

  • A. Ensure that the network ACL allows outbound traffic to the public internet.
  • B. Ensure that the security group allows outbound traffic to the public internet.
  • C. Ensure that outbound traffic from the private subnet is routed to a public NAT gateway.
  • D. Ensure that outbound traffic from the private subnet is routed to a new internet gateway.
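The fix in option C amounts to one route in the private subnet's route table. A sketch of the parameters an `ec2.create_route` call would take (the resource IDs are placeholders):

```python
def build_nat_route(route_table_id: str, nat_gateway_id: str) -> dict:
    """Parameters for ec2.create_route: a default route from the private subnet's
    route table to a NAT gateway that lives in a public subnet."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",  # all internet-bound traffic
        "NatGatewayId": nat_gateway_id,
    }


# Placeholder IDs; with boto3 this would be ec2.create_route(**params).
params = build_nat_route("rtb-0abc1234", "nat-0def5678")
```

An internet gateway alone does not help here: a Lambda function attached to a VPC has no public IP, so its private subnet must route outbound traffic through a NAT gateway.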

Correct Answer: A 🗳️

Community vote distribution
C (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: C
When a Lambda function is configured to connect to a VPC, it loses its default internet access. To allow the Lambda function to access the public internet, it must be connected to a private subnet in the VPC that is configured to route its traffic through a NAT Gateway (Network Address Translation Gateway). The Internet Gateway is usually used to provide internet access to resources in the public subnet, but for resources in the private subnet, a NAT Gateway is required.
upvoted 3 times
...
Naj_64
2 months, 2 weeks ago
Selected Answer: C
NAT Gateway from a public subnet is required.
upvoted 1 times
...
cmonthatsme
3 months ago
Selected Answer: C
The Lambda function is running in a private subnet of the VPC, it needs to send outbound traffic to the internet to reach the API endpoints. To enable this, a NAT gateway is required.
upvoted 1 times
...
Parsons
3 months ago
Selected Answer: C
C is correct. With Lambda in a private subnet, you need a NAT gateway to be able to access the public internet.
upvoted 1 times
...
cloudenthusiast
3 months ago
Selected Answer: C
it leverages a NAT gateway, which is a service that enables instances in a private subnet to connect to the internet or other AWS services, but prevents the internet from initiating a connection with those instances.
upvoted 2 times
...
Question #133 Topic 1

A developer needs to store configuration variables for an application. The developer needs to set an expiration date and time for the configuration. The developer wants to receive notifications before the configuration expires.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a standard parameter in AWS Systems Manager Parameter Store. Set Expiration and ExpirationNotification policy types.
  • B. Create a standard parameter in AWS Systems Manager Parameter Store. Create an AWS Lambda function to expire the configuration and to send Amazon Simple Notification Service (Amazon SNS) notifications.
  • C. Create an advanced parameter in AWS Systems Manager Parameter Store. Set Expiration and ExpirationNotification policy types.
  • D. Create an advanced parameter in AWS Systems Manager Parameter Store. Create an Amazon EC2 instance with a cron job to expire the configuration and to send notifications.
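The policy mechanism behind options A and C can be sketched as the arguments to `put_parameter`; the `Policies` field requires the Advanced tier, which is why the standard-parameter options fall away. Names and dates below are illustrative:

```python
import json


def build_advanced_parameter(name: str, value: str, expire_at_iso: str) -> dict:
    """Parameters for ssm.put_parameter: an Advanced-tier parameter carrying
    Expiration and ExpirationNotification policies (Standard tier rejects Policies)."""
    policies = [
        {"Type": "Expiration", "Version": "1.0",
         "Attributes": {"Timestamp": expire_at_iso}},
        {"Type": "ExpirationNotification", "Version": "1.0",
         "Attributes": {"Before": "15", "Unit": "Days"}},
    ]
    return {
        "Name": name,
        "Value": value,
        "Type": "String",
        "Tier": "Advanced",
        "Policies": json.dumps(policies),  # Parameter Store expects a JSON string
    }


params = build_advanced_parameter("/app/config", "v1", "2024-12-31T00:00:00.000Z")
```

Parameter Store then deletes the parameter at the expiration timestamp and emits the notification event on its own, with no Lambda function or cron job to maintain.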

Correct Answer: D 🗳️

Community vote distribution
C (82%)
Other

Parsons
Highly Voted 3 months ago
Selected Answer: C
C is correct. You have to use an advanced parameter in AWS Systems Manager Parameter Store to be able to set the Expiration and ExpirationNotification policy types.
upvoted 6 times
...
Rameez1
Most Recent 2 weeks, 3 days ago
Selected Answer: B
Using a Lambda function and SNS will address the requirement with the least operational overhead.
upvoted 1 times
Rameez1
2 weeks, 1 day ago
Changing my mind option A is correct here.
upvoted 1 times
...
...
Gold07
3 weeks, 4 days ago
A is the right Answer
upvoted 2 times
...
worseforwear
3 months ago
Selected Answer: C
You can't set expiration policy on standard parameter
upvoted 4 times
...
cmonthatsme
3 months ago
Selected Answer: A
By creating a standard parameter, you can set an expiration date for the parameter
upvoted 2 times
...
cloudenthusiast
3 months ago
Selected Answer: C
it leverages the advanced parameter tier and the parameter policies feature of Parameter Store, which meet the requirements with the least operational overhead.
upvoted 4 times
...
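As the top-voted comments note, parameter policies are only accepted on the advanced tier. A minimal boto3 sketch of answer C — the parameter name, value, and timestamps below are placeholders, not from the question:

```python
import json

# Expiration deletes the parameter at the given time; ExpirationNotification
# emits an EventBridge event beforehand so the team is notified before expiry.
POLICIES = [
    {
        "Type": "Expiration",
        "Version": "1.0",
        "Attributes": {"Timestamp": "2025-12-31T23:59:59.000Z"},
    },
    {
        "Type": "ExpirationNotification",
        "Version": "1.0",
        # Send the notification 15 days before the expiration timestamp.
        "Attributes": {"Before": "15", "Unit": "Days"},
    },
]

def put_config_parameter(ssm_client, name, value):
    """Store a configuration value as an advanced-tier parameter."""
    return ssm_client.put_parameter(
        Name=name,
        Value=value,
        Type="String",
        Tier="Advanced",  # parameter policies are rejected on the Standard tier
        Policies=json.dumps(POLICIES),
        Overwrite=True,
    )

# Usage (requires AWS credentials):
# import boto3
# put_config_parameter(boto3.client("ssm"), "/app/config", "v1")
```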
Question #134 Topic 1

A company is developing a serverless application that consists of various AWS Lambda functions behind Amazon API Gateway APIs. A developer needs to automate the deployment of Lambda function code. The developer will deploy updated Lambda functions with AWS CodeDeploy. The deployment must minimize the exposure of potential errors to end users. When the application is in production, the application cannot experience downtime outside the specified maintenance window.

Which deployment configuration will meet these requirements with the LEAST deployment time?

  • A. Use the AWS CodeDeploy in-place deployment configuration for the Lambda functions. Shift all traffic immediately after deployment.
  • B. Use the AWS CodeDeploy linear deployment configuration to shift 10% of the traffic every minute.
  • C. Use the AWS CodeDeploy all-at-once deployment configuration to shift all traffic to the updated versions immediately.
  • D. Use the AWS CodeDeploy predefined canary deployment configuration to shift 10% of the traffic immediately and shift the remaining traffic after 5 minutes.

Correct Answer: A 🗳️

Community vote distribution
D (80%)
A (20%)

jingle4944
1 week, 6 days ago
Canary deployment is supported: https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/
upvoted 1 times
...
passhojaun
2 weeks, 5 days ago
Selected Answer: A
Canary is not supported in AWS CodeDeploy.
upvoted 1 times
Jaimoo
1 week, 6 days ago
https://aws.amazon.com/es/blogs/containers/aws-codedeploy-now-supports-linear-and-canary-deployments-for-amazon-ecs/
upvoted 2 times
...
...
passhojaun
2 weeks, 5 days ago
Canary is not supported in AWS CodeDeploy.
upvoted 1 times
...
Yuxing_Li
2 months, 1 week ago
Selected Answer: D
Canary is faster than linear in this case.
upvoted 2 times
...
love777
2 months, 2 weeks ago
Selected Answer: A
Explanation: In an AWS Lambda context, using the in-place deployment configuration minimizes deployment time and provides fast updates to the function's code. In this case, the application consists of AWS Lambda functions behind Amazon API Gateway APIs. With the in-place deployment configuration, all traffic is shifted to the updated versions of the Lambda functions immediately after deployment.

Option B suggests a linear deployment configuration that shifts 10% of the traffic every minute. While this provides controlled deployment and gradual rollout, it might not be the fastest approach if you want to minimize deployment time.

Option C suggests an all-at-once deployment configuration. While this configuration might be fast, it poses a higher risk of exposing potential errors to end users all at once.
upvoted 1 times
...
RaidenKurosaki
3 months ago
Selected Answer: D
Canary deployment
upvoted 2 times
...
Parsons
3 months ago
Selected Answer: D
D is correct. Keywords:
- "must minimize the exposure of potential errors to end users": you only trade off 10% of the traffic.
- "cannot experience downtime": eliminates C.
- "LEAST deployment time": with B you have to wait 10 minutes, while D takes just 5.
upvoted 3 times
...
cloudenthusiast
3 months ago
Selected Answer: D
the predefined canary deployment configuration, which shifts a small percentage of traffic to the updated versions immediately, and then shifts the remaining traffic after a specified period
upvoted 1 times
...
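For reference, the predefined configuration in answer D is named Canary10Percent5Minutes. In an AWS SAM template it is wired up through a DeploymentPreference, which provisions the CodeDeploy resources behind the scenes — the function name, handler, and alias here are illustrative:

```yaml
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler          # placeholder
      Runtime: python3.12
      AutoPublishAlias: live        # CodeDeploy shifts alias traffic between versions
      DeploymentPreference:
        # 10% of traffic immediately, the remaining 90% after 5 minutes
        Type: Canary10Percent5Minutes
```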
Question #135 Topic 1

A company created four AWS Lambda functions that connect to a relational database server that runs on an Amazon RDS instance. A security team requires the company to automatically change the database password every 30 days.

Which solution will meet these requirements MOST securely?

  • A. Store the database credentials in the environment variables of the Lambda function. Deploy the Lambda function with the new credentials every 30 days.
  • B. Store the database credentials in AWS Secrets Manager. Configure a 30-day rotation schedule for the credentials.
  • C. Store the database credentials in AWS Systems Manager Parameter Store secure strings. Configure a 30-day schedule for the secure strings.
  • D. Store the database credentials in an Amazon S3 bucket that uses server-side encryption with customer-provided encryption keys (SSE-C). Configure a 30-day key rotation schedule for the customer key.

Correct Answer: C 🗳️

Community vote distribution
B (100%)

Dushank
1 month, 4 weeks ago
Selected Answer: B
The most secure and automated way to handle database credential rotation is to use AWS Secrets Manager. Secrets Manager can automatically rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to automatically rotate the secrets for you according to a schedule you specify, making it easier to adhere to best practices for security.
upvoted 3 times
...
RaidenKurosaki
3 months ago
Selected Answer: B
Secrets Manager supports auto rotation. Systems Manager does not do that.
upvoted 2 times
...
Parsons
3 months ago
Selected Answer: B
B is correct. Keyword: "automatically change the database password every 30 days"
upvoted 2 times
...
cloudenthusiast
3 months ago
Selected Answer: B
Secrets Manager supports automatic rotation of secrets by using either built-in or custom Lambda functions
upvoted 3 times
niks1221
3 months ago
DId you give your exam recently? If yes, how many questions were from here?
upvoted 1 times
...
...
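A minimal boto3 sketch of the 30-day rotation in answer B — the secret ID and rotation function ARN are placeholders, and for Amazon RDS secrets AWS publishes ready-made rotation Lambda templates:

```python
# Rotation cadence required by the security team.
ROTATION_RULES = {"AutomaticallyAfterDays": 30}

def enable_rotation(secrets_client, secret_id, rotation_lambda_arn):
    """Turn on automatic rotation for an existing Secrets Manager secret."""
    return secrets_client.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules=ROTATION_RULES,
    )

# Usage (requires AWS credentials; ARN and secret ID are placeholders):
# import boto3
# enable_rotation(boto3.client("secretsmanager"),
#                 "prod/rds/app-db",
#                 "arn:aws:lambda:us-east-1:123456789012:function:rotate-db")
```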
Question #136 Topic 1

A developer is setting up a deployment pipeline. The pipeline includes an AWS CodeBuild build stage that requires access to a database to run integration tests. The developer is using a buildspec.yml file to configure the database connection. Company policy requires automatic rotation of all database credentials.

Which solution will handle the database credentials MOST securely?

  • A. Retrieve the credentials from variables that are hardcoded in the buildspec.yml file. Configure an AWS Lambda function to rotate the credentials.
  • B. Retrieve the credentials from an environment variable that is linked to a SecureString parameter in AWS Systems Manager Parameter Store. Configure Parameter Store for automatic rotation.
  • C. Retrieve the credentials from an environment variable that is linked to an AWS Secrets Manager secret. Configure Secrets Manager for automatic rotation.
  • D. Retrieve the credentials from an environment variable that contains the connection string in plaintext. Configure an Amazon EventBridge event to rotate the credentials.

Correct Answer: A 🗳️

Community vote distribution
C (100%)

Gold07
1 month, 1 week ago
C is the correct answer.
upvoted 2 times
...
cmonthatsme
3 months ago
Selected Answer: C
Secure + Rotation are key words for Secrets Manager
upvoted 3 times
...
Parsons
3 months ago
Selected Answer: C
C is correct. Explanation: "requires automatic rotation of all database credentials" => "Secrets Manager for automatic rotation." With the Systems Manager Parameter Store, you have to do that manually.
upvoted 3 times
...
cloudenthusiast
3 months ago
Selected Answer: C
Because configure Secrets Manager for automatic rotation
upvoted 2 times
...
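Answer C maps directly onto the buildspec's env section: CodeBuild resolves the secret at build time, so no credential ever lands in source control. The secret name, JSON keys, and test script below are placeholders:

```yaml
version: 0.2
env:
  secrets-manager:
    # SECRET_ID:JSON_KEY — resolved by CodeBuild when the build starts
    DB_USER: "prod/db-credentials:username"
    DB_PASSWORD: "prod/db-credentials:password"
phases:
  build:
    commands:
      - ./run-integration-tests.sh   # reads $DB_USER / $DB_PASSWORD from the environment
```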
Question #137 Topic 1

A company is developing a serverless multi-tier application on AWS. The company will build the serverless logic tier by using Amazon API Gateway and AWS Lambda.
While the company builds the logic tier, a developer who works on the frontend of the application must develop integration tests. The tests must cover both positive and negative scenarios, depending on success and error HTTP status codes.

Which solution will meet these requirements with the LEAST effort?

  • A. Set up a mock integration for API methods in API Gateway. In the integration request from Method Execution, add simple logic to return either a success or error based on HTTP status code. In the integration response, add messages that correspond to the HTTP status codes.
  • B. Create two mock integration resources for API methods in API Gateway. In the integration request, return a success HTTP status code for one resource and an error HTTP status code for the other resource. In the integration response, add messages that correspond to the HTTP status codes.
  • C. Create Lambda functions to perform tests. Add simple logic to return either success or error, based on the HTTP status codes. Build an API Gateway Lambda integration. Select appropriate Lambda functions that correspond to the HTTP status codes.
  • D. Create a Lambda function to perform tests. Add simple logic to return either success or error-based HTTP status codes. Create a mock integration in API Gateway. Select the Lambda function that corresponds to the HTTP status codes.

Correct Answer: C 🗳️

Community vote distribution
A (90%)
10%

Parsons
Highly Voted 3 months ago
Selected Answer: A
A is correct (with the LEAST effort) "API Gateway supports mock integrations for API methods" "As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method's integration request and integration response to associate a response with a given status code. " https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html
upvoted 6 times
...
[Removed]
Most Recent 2 months, 4 weeks ago
Selected Answer: B
The tests must cover both positive and negative scenarios, depending on success and error HTTP status codes.
upvoted 1 times
...
cloudenthusiast
3 months ago
Selected Answer: A
A because set up a mock integration for API methods in API Gateway with the least effort.
upvoted 3 times
...
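The mock integration in answer A can be driven from a single method: the integration request's mapping template chooses the returned statusCode, and integration responses map each code to a canned body. A boto3 sketch — the scenario query parameter, response bodies, and IDs are assumptions, and the method and its method responses must already exist:

```python
import json

# VTL mapping template: return statusCode 500 when ?scenario=error, else 200.
REQUEST_TEMPLATE = (
    '{"statusCode": '
    '#if($input.params("scenario") == "error")500#{else}200#end}'
)

# Canned integration responses: (statusCode, selectionPattern, body).
RESPONSES = [
    ("200", "", {"message": "order accepted"}),
    ("500", "5\\d{2}", {"message": "order failed"}),
]

def configure_mock(apigw, rest_api_id, resource_id, http_method="GET"):
    """Wire up a MOCK integration that can return success or error codes."""
    apigw.put_integration(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod=http_method,
        type="MOCK",
        requestTemplates={"application/json": REQUEST_TEMPLATE},
    )
    for status, pattern, body in RESPONSES:
        apigw.put_integration_response(
            restApiId=rest_api_id,
            resourceId=resource_id,
            httpMethod=http_method,
            statusCode=status,
            selectionPattern=pattern,
            responseTemplates={"application/json": json.dumps(body)},
        )
```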
Question #138 Topic 1

Users are reporting errors in an application. The application consists of several microservices that are deployed on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.

Which combination of steps should a developer take to fix the errors? (Choose two.)

  • A. Deploy AWS X-Ray as a sidecar container to the microservices. Update the task role policy to allow access to the X-Ray API.
  • B. Deploy AWS X-Ray as a daemonset to the Fargate cluster. Update the service role policy to allow access to the X-Ray API.
  • C. Instrument the application by using the AWS X-Ray SDK. Update the application to use the PutXrayTrace API call to communicate with the X-Ray API.
  • D. Instrument the application by using the AWS X-Ray SDK. Update the application to communicate with the X-Ray daemon.
  • E. Instrument the ECS task to send the stdout and stderr output to Amazon CloudWatch Logs. Update the task role policy to allow the cloudwatch:PullLogs action.

Correct Answer: A 🗳️

Community vote distribution
D (57%)
A (29%)
14%

MG1407
Highly Voted 2 months, 3 weeks ago
AD. A: With Fargate, you can only run X-Ray as a sidecar container because there is no EC2 host to install the daemon on. D: https://github.com/aws-samples/aws-xray-fargate
upvoted 9 times
Nagasoracle
2 weeks, 5 days ago
I agree - AD https://github.com/aws-samples/aws-xray-fargate
upvoted 1 times
...
Iamtany
1 month, 3 weeks ago
With AWS Fargate, there are no EC2 instances to install the X-Ray daemon onto. However, the X-Ray daemon is actually provided automatically with Fargate - it runs as an additional container alongside the application containers in the task. So there is no need to deploy it as a sidecar.

When using X-Ray with Fargate, you just need to instrument the application code with the X-Ray SDK; the SDK will communicate with the daemon container provided by Fargate.

So you're right that there are no EC2 hosts to install daemons on directly. But Fargate handles running the X-Ray daemon automatically as part of the task, eliminating the need for a sidecar. The SDK can communicate with the daemon container transparently.
upvoted 2 times
...
...
Passexam4sure_com
Most Recent 3 weeks, 2 days ago
Selected Answer: D
Instrument the application by using the AWS X-Ray SDK. Update the application to communicate with the X-Ray daemon
upvoted 1 times
...
Claire_KMT
3 weeks, 3 days ago
D. Instrument the application by using the AWS X-Ray SDK. Update the application to communicate with the X-Ray daemon. E. Instrument the ECS task to send the stdout and stderr output to Amazon CloudWatch Logs. Update the task role policy to allow the cloudwatch:PullLogs action.
upvoted 1 times
...
fossil123
2 months ago
Selected Answer: A
AD is correct. A: run the X-Ray container as a "sidecar" in the ECS/Fargate cluster. D: instrument the application using the AWS X-Ray SDK to collect telemetry data.
upvoted 2 times
...
love777
2 months, 2 weeks ago
Selected Answer: D
D and E.

Option D: Instrumenting the application using the AWS X-Ray SDK is essential for collecting traces and telemetry data. The X-Ray SDK helps you identify bottlenecks, errors, and other issues within your microservices. Communicating with the X-Ray daemon allows your microservices to send trace data to X-Ray for analysis and visualization. This requires minimal configuration and is efficient for capturing and analyzing traces.

Option E: Instrumenting the ECS task to send the application's standard output (stdout) and standard error (stderr) logs to Amazon CloudWatch Logs provides visibility into the application's behavior, errors, and issues. Updating the task role policy to allow the cloudwatch:PullLogs action ensures that the ECS task has the necessary permissions to access and send logs to CloudWatch Logs.
upvoted 3 times
...
AWSdeveloper08
2 months, 4 weeks ago
Selected Answer: C
Answer is CE. To diagnose and fix errors in an application deployed on Amazon ECS with AWS Fargate using AWS X-Ray, you should take the following steps:

C. Instrument the application by using the AWS X-Ray SDK. Update the application to use the PutXrayTrace API call to communicate with the X-Ray API. Instrumenting the application using the AWS X-Ray SDK allows you to capture traces and data about requests as they flow through your application's components.

E. Instrument the ECS task to send the stdout and stderr output to Amazon CloudWatch Logs. Update the task role policy to allow the cloudwatch:PullLogs action. This step will help you capture logs from your microservices, which can provide additional insights into the errors and issues occurring within the application.
upvoted 1 times
...
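Whichever pair is chosen, answer D's half looks the same in code: the application is instrumented with the X-Ray SDK and sends segments to a daemon reachable on the task's local network (the sidecar from answer A shares the task's network namespace). A sketch assuming the Python SDK (`pip install aws-xray-sdk`); the service name is a placeholder:

```python
import os

# The daemon's default address; in a Fargate task the sidecar shares the
# task's network namespace, so localhost works.
DAEMON_ADDRESS = os.environ.get("AWS_XRAY_DAEMON_ADDRESS", "127.0.0.1:2000")

def instrument():
    """Configure the X-Ray recorder and auto-patch supported libraries."""
    from aws_xray_sdk.core import xray_recorder, patch_all
    xray_recorder.configure(
        service="orders-service",          # placeholder service name
        daemon_address=DAEMON_ADDRESS,
    )
    patch_all()  # instruments boto3, requests, etc., so calls appear as subsegments
```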
Question #139 Topic 1

A developer is creating an application for a company. The application needs to read the file doc.txt that is placed in the root folder of an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. The company’s security team requires the principle of least privilege to be applied to the application’s IAM policy.

Which IAM policy statement will meet these security requirements?

  • A.
  • B.
  • C.
  • D.

Correct Answer: D 🗳️

Community vote distribution
A (100%)

Gadu
3 months ago
Selected Answer: A
Only read permission for the file
upvoted 3 times
...
cmonthatsme
3 months ago
Selected Answer: A
Only allow to get this one file. A
upvoted 3 times
...
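The answer options here are images that did not survive extraction. Consistent with the comments ("only read permission for the file"), a least-privilege statement would scope s3:GetObject to the single object — a sketch, not the exam's exact wording:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/doc.txt"
    }
  ]
}
```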
Question #140 Topic 1

A company has an application that uses AWS CodePipeline to automate its continuous integration and continuous delivery (CI/CD) workflow. The application uses AWS CodeCommit for version control. A developer who was working on one of the tasks did not pull the most recent changes from the main branch. A week later, the developer noticed merge conflicts.

How can the developer resolve the merge conflicts in the developer's branch with the LEAST development effort?

  • A. Clone the repository. Create a new branch. Update the branch with the changes.
  • B. Create a new branch. Apply the changes from the previous branch.
  • C. Use the Commit Visualizer view to compare the commits when a feature was added. Fix the merge conflicts.
  • D. Stop the pull from the main branch to the feature branch. Rebase the feature branch from the main branch.

Correct Answer: D 🗳️

Community vote distribution
D (77%)
C (23%)

Passexam4sure_com
3 weeks, 2 days ago
D. Stop the pull from the main branch to the feature branch. Rebase the feature branch from the main branch.
upvoted 1 times
...
Claire_KMT
3 weeks, 3 days ago
D. Stop the pull from the main branch to the feature branch. Rebase the feature branch from the main branch.
upvoted 1 times
...
Iamtany
1 month, 3 weeks ago
Selected Answer: D
Rebasing the feature branch from the main branch would apply the changes from the main branch directly onto the feature branch, effectively bringing it up to date. This would resolve the conflicts in a way that minimizes manual effort.
upvoted 3 times
...
DhiegoPimenta
2 months, 1 week ago
Selected Answer: D
Option D is the best approach for resolving the merge conflicts
upvoted 2 times
...
love777
2 months, 2 weeks ago
Selected Answer: D
Option D is the best approach for resolving the merge conflicts with minimal development effort. Here's how it works:

Stop pull from main: By stopping the pull from the main branch to the feature branch, the developer can prevent the introduction of new conflicts while they are resolving the existing ones.

Rebase the feature branch: After stopping the pull, the developer can rebase the feature branch onto the main branch. This essentially replays the feature branch's changes on top of the main branch's latest changes. This allows the developer to resolve conflicts one commit at a time, addressing any conflicts that arise from the difference between the feature branch and the main branch.
upvoted 4 times
...
[Removed]
2 months, 4 weeks ago
Selected Answer: D
Using the git rebase command to rebase a repository changes the history of a repository, which might cause commits to appear out of order. https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-view-commit-details.html
upvoted 1 times
...
AWSdeveloper08
2 months, 4 weeks ago
Selected Answer: C
Comparing commits in the Commit Visualizer view can provide a clear overview of the changes made over time and aid in understanding the context of the conflicts. This approach can help you pinpoint where conflicts arose and assist you in making informed decisions about how to resolve them.
upvoted 2 times
...
worseforwear
3 months ago
Selected Answer: C
Answer D won't fix the problem
upvoted 1 times
Cerakoted
3 weeks, 5 days ago
I think C would take huge development effort
upvoted 1 times
...
...
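Answer D's workflow can be tried end to end in a throwaway repository — the file names and commit messages below are invented for the demo, and git 2.28+ is assumed for `init -b`:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email dev@example.com
git config user.name dev

echo base > app.txt && git add . && git commit -qm "base"

# The feature branch diverges...
git checkout -qb feature
echo feature-work > feature.txt && git add . && git commit -qm "feature work"

# ...while main moves on.
git checkout -q main
echo main-update > main.txt && git add . && git commit -qm "main update"

# Replay the feature commits on top of main. On a conflict, fix the file,
# then: git add <file> && git rebase --continue
git checkout -q feature
git rebase -q main
ls
```

After the rebase, the feature branch contains both its own work and main's update, so a later merge into main is conflict-free.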
Question #141 Topic 1

A developer wants to add request validation to a production environment Amazon API Gateway API. The developer needs to test the changes before the API is deployed to the production environment. For the test, the developer will send test requests to the API through a testing tool.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Export the existing API to an OpenAPI file. Create a new API. Import the OpenAPI file. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
  • B. Modify the existing API to add request validation. Deploy the updated API to a new API Gateway stage. Perform the tests. Deploy the updated API to the API Gateway production stage.
  • C. Create a new API. Add the necessary resources and methods, including new request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production
  • D. Clone the existing API. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

AWSdeveloper08
Highly Voted 2 months, 4 weeks ago
Selected Answer: B
In this option, you are making changes directly to the existing API, adding request validation. Then, you deploy the updated API to a new API Gateway stage, which allows you to test the changes without affecting the production environment. After performing the tests and ensuring everything works as expected, you can then deploy the updated API to the production stage, thus minimizing operational overhead.
upvoted 6 times
...
imyashkale
Most Recent 1 month, 3 weeks ago
Selected Answer: B
It looks Correct
upvoted 2 times
...
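Answer B in boto3 terms: create a deployment into a non-production stage, test against it, then point the production stage at the same deployment. The stage names and REST API ID are placeholders:

```python
def deploy_to_test_stage(apigw, rest_api_id):
    """Deploy the current API definition (with request validation) to a test stage."""
    deployment = apigw.create_deployment(
        restApiId=rest_api_id,
        stageName="test",
        description="request validation changes",
    )
    return deployment["id"]

def promote_patch(deployment_id):
    """Patch operations that repoint an existing stage at a deployment."""
    return [{"op": "replace", "path": "/deploymentId", "value": deployment_id}]

# After the tests pass (requires AWS credentials):
# apigw.update_stage(restApiId=rest_api_id, stageName="prod",
#                    patchOperations=promote_patch(deployment_id))
```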
Question #142 Topic 1

An online food company provides an Amazon API Gateway HTTP API to receive orders for partners. The API is integrated with an AWS Lambda function. The Lambda function stores the orders in an Amazon DynamoDB table.

The company expects to onboard additional partners. Some of the partners require additional Lambda functions to receive orders. The company has created an Amazon S3 bucket. The company needs to store all orders and updates in the S3 bucket for future analysis.

How can the developer ensure that all orders and updates are stored to Amazon S3 with the LEAST development effort?

  • A. Create a new Lambda function and a new API Gateway API endpoint. Configure the new Lambda function to write to the S3 bucket. Modify the original Lambda function to post updates to the new API endpoint.
  • B. Use Amazon Kinesis Data Streams to create a new data stream. Modify the Lambda function to publish orders to the data stream. Configure the data stream to write to the S3 bucket.
  • C. Enable DynamoDB Streams on the DynamoDB table. Create a new Lambda function. Associate the stream’s Amazon Resource Name (ARN) with the Lambda function. Configure the Lambda function to write to the S3 bucket as records appear in the table's stream.
  • D. Modify the Lambda function to publish to a new Amazon Simple Notification Service (Amazon SNS) topic as the Lambda function receives orders. Subscribe a new Lambda function to the topic. Configure the new Lambda function to write to the S3 bucket as updates come through the topic.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

AWSdeveloper08
Highly Voted 2 months, 4 weeks ago
Selected Answer: C
By enabling DynamoDB Streams on the DynamoDB table, you can capture changes (orders and updates) to the table. Whenever a new order or an update is made to the table, a stream record is generated. You can then create a new Lambda function, associate the stream's ARN with this Lambda function, and configure it to write the stream records (orders and updates) to the S3 bucket. This approach leverages built-in features of DynamoDB and Lambda, minimizing the development effort required to achieve the desired outcome.
upvoted 5 times
...
Dushank
Most Recent 1 month, 4 weeks ago
Selected Answer: C
Enabling DynamoDB Streams on the existing DynamoDB table and associating a new Lambda function to it would be a straightforward way to capture all changes (new orders and updates) in the DynamoDB table. The new Lambda function would automatically be triggered when a new record appears in the table's stream and could be configured to write this data to the S3 bucket. This is likely the least effort-intensive approach for meeting the requirement.
upvoted 3 times
...
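A sketch of the stream-consumer Lambda function from answer C — the bucket name and key layout are assumptions, and the stream is assumed to be enabled with NEW_IMAGE (or NEW_AND_OLD_IMAGES) so each record carries the item's new state:

```python
import json

ARCHIVE_BUCKET = "order-archive-bucket"  # placeholder bucket name

def archive_args(record):
    """Build put_object arguments for one stream record, or None to skip it."""
    if record["eventName"] not in ("INSERT", "MODIFY"):
        return None
    image = record["dynamodb"].get("NewImage", {})
    return {
        "Bucket": ARCHIVE_BUCKET,
        "Key": f"orders/{record['dynamodb']['SequenceNumber']}.json",
        "Body": json.dumps(image).encode(),
    }

def handler(event, context):
    """Lambda entry point: archive every insert/update from the stream to S3."""
    import boto3
    s3 = boto3.client("s3")
    for record in event.get("Records", []):
        args = archive_args(record)
        if args:
            s3.put_object(**args)
```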
Question #143 Topic 1

A company’s website runs on an Amazon EC2 instance and uses Auto Scaling to scale the environment during peak times. Website users across the world are experiencing high latency due to static content on the EC2 instance, even during non-peak hours.

Which combination of steps will resolve the latency issue? (Choose two.)

  • A. Double the Auto Scaling group’s maximum number of servers.
  • B. Host the application code on AWS Lambda.
  • C. Scale vertically by resizing the EC2 instances.
  • D. Create an Amazon CloudFront distribution to cache the static content.
  • E. Store the application’s static content in Amazon S3.

Correct Answer: DE 🗳️

Community vote distribution
DE (100%)

Digo30sp
1 month ago
Selected Answer: DE
Option (D), creating an Amazon CloudFront distribution to cache static content, is the most recommended solution. CloudFront is a global content delivery network (CDN) that can cache static content on servers distributed around the world. This can help significantly reduce latency for users around the world. Option (E), storing your application's static content in Amazon S3, can also help reduce latency. S3 is a high-performance object storage service that can be used to store static content.
upvoted 2 times
...
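The E half of the answer benefits from cache-friendly uploads: giving each static object a correct Content-Type and a Cache-Control header lets the CloudFront distribution from D serve it from the edge. A sketch — the key prefix and max-age are assumptions:

```python
import mimetypes

def static_upload_args(filename, max_age=86400):
    """put_object kwargs for a static asset: correct Content-Type plus a
    Cache-Control header that lets CloudFront cache it at the edge."""
    content_type, _ = mimetypes.guess_type(filename)
    return {
        "Key": f"static/{filename}",
        "ContentType": content_type or "application/octet-stream",
        "CacheControl": f"public, max-age={max_age}",  # 1 day by default
    }

# Usage (requires AWS credentials; bucket name is a placeholder):
# boto3.client("s3").put_object(Bucket="my-static-bucket",
#                               Body=data, **static_upload_args("logo.png"))
```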
Question #144 Topic 1

A company has an Amazon S3 bucket containing premier content that it intends to make available to only paid subscribers of its website. The S3 bucket currently has default permissions of all objects being private to prevent inadvertent exposure of the premier content to non-paying website visitors.

How can the company limit the ability to download a premier content file in the S3 bucket to paid subscribers only?

  • A. Apply a bucket policy that allows anonymous users to download the content from the S3 bucket.
  • B. Generate a pre-signed object URL for the premier content file when a paid subscriber requests a download.
  • C. Add a bucket policy that requires multi-factor authentication for requests to access the S3 bucket objects.
  • D. Enable server-side encryption on the S3 bucket for data protection against the non-paying website visitors.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). By generating a pre-signed object URL for the main content file when a paid subscriber requests a download, the company can control who can download the file. The pre-signed object URL will be valid for a limited period of time and can only be used by the paid subscriber who requested the download.
upvoted 2 times
...
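A sketch of answer B with boto3 — the TTL cap is a choice made for this example, not an S3 limit, and the paid-subscriber check happens before this function is called:

```python
MAX_TTL = 900  # cap chosen for this sketch, not an S3 limit

def clamp_ttl(seconds):
    """Keep the URL short-lived: at least 1 second, at most MAX_TTL."""
    return max(1, min(seconds, MAX_TTL))

def premier_download_url(s3_client, bucket, key, ttl=300):
    """Generate a time-limited GET URL once the caller is confirmed to be a
    paid subscriber (that check is outside this sketch)."""
    return s3_client.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=clamp_ttl(ttl),  # seconds; the URL is useless afterwards
    )
```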
Question #145 Topic 1

A developer is creating an AWS Lambda function that searches for items from an Amazon DynamoDB table that contains customer contact information. The DynamoDB table items have the customer’s email_address as the partition key and additional properties such as customer_type, name and job_title.

The Lambda function runs whenever a user types a new character into the customer_type text input. The developer wants the search to return partial matches of all the email_address property of a particular customer_type. The developer does not want to recreate the DynamoDB table.

What should the developer do to meet these requirements?

  • A. Add a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
  • B. Add a global secondary index (GSI) to the DynamoDB table with email_address as the partition key and customer_type as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
  • C. Add a local secondary index (LSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.
  • D. Add a local secondary index (LSI) to the DynamoDB table with job_title as the partition key and email_address as the sort key. Perform a query operation on the LSI by using the begins_with key condition expression with the email_address property.

Correct Answer: D 🗳️

Community vote distribution
A (100%)

Jing2023
3 weeks, 5 days ago
A is correct
upvoted 1 times
...
Patel_ajay745
1 month ago
A Add a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key. Perform a query operation on the GSI by using the begins_with key condition expression with the email_address property.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). By adding a global secondary index (GSI) to the DynamoDB table with customer_type as the partition key and email_address as the sort key, the developer can perform a query operation on the GSI using the Begins_with key condition expression with the email_address property. This will return partial matches of all email_address properties of a specific customer_type.
upvoted 4 times
...
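Answer A's query shape, sketched for the low-level DynamoDB client: an exact match on the GSI partition key (customer_type) and begins_with on the sort key (email_address). The index and table names are assumptions:

```python
INDEX_NAME = "customer_type-email_address-index"  # assumed GSI name

def gsi_query_params(customer_type, email_prefix):
    """Query parameters: exact match on the GSI partition key, begins_with
    on the sort key for partial email matches."""
    return {
        "IndexName": INDEX_NAME,
        "KeyConditionExpression":
            "customer_type = :ct AND begins_with(email_address, :prefix)",
        "ExpressionAttributeValues": {
            ":ct": {"S": customer_type},
            ":prefix": {"S": email_prefix},
        },
    }

# Usage (requires AWS credentials; table name is a placeholder):
# boto3.client("dynamodb").query(TableName="Customers",
#                                **gsi_query_params("premium", "john"))
```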
Question #146 Topic 1

A developer is building an application that uses Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables. The developer uses the AWS Serverless Application Model (AWS SAM) to build and run serverless applications on AWS. Each time the developer pushes changes to only the Lambda functions, all the artifacts in the application are rebuilt.

The developer wants to implement AWS SAM Accelerate by running a command to only redeploy the Lambda functions that have changed.

Which command will meet these requirements?

  • A. sam deploy --force-upload
  • B. sam deploy --no-execute-changeset
  • C. sam package
  • D. sam sync --watch

Correct Answer: C 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 6 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). The sam sync --watch command will only deploy the Lambda functions that have changed. This command uses AWS SAM Accelerate to compare the local versions of your Lambda functions to the versions deployed in AWS. If there are differences, the command deploys only the changed Lambda functions.
upvoted 3 times
...
Question #147 Topic 1

A developer is building an application that gives users the ability to view bank accounts from multiple sources in a single dashboard. The developer has automated the process to retrieve API credentials for these sources. The process invokes an AWS Lambda function that is associated with an AWS CloudFormation custom resource.

The developer wants a solution that will store the API credentials with minimal operational overhead.

Which solution will meet these requirements in the MOST secure way?

  • A. Add an AWS Secrets Manager GenerateSecretString resource to the CloudFormation template. Set the value to reference new credentials for the CloudFormation resource.
  • B. Use the AWS SDK ssm:PutParameter operation in the Lambda function from the existing custom resource to store the credentials as a parameter. Set the parameter value to reference the new credentials. Set the parameter type to SecureString.
  • C. Add an AWS Systems Manager Parameter Store resource to the CloudFormation template. Set the CloudFormation resource value to reference the new credentials. Set the resource NoEcho attribute to true.
  • D. Use the AWS SDK ssm:PutParameter operation in the Lambda function from the existing custom resource to store the credentials as a parameter. Set the parameter value to reference the new credentials. Set the parameter NoEcho attribute to true.

Correct Answer: D 🗳️

Community vote distribution
B (57%)
D (43%)

ut18
1 week, 6 days ago
Is B the correct answer? SecureString isn't currently supported for AWS CloudFormation templates. https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html
upvoted 1 times
...
Bolu_Jay
2 weeks ago
Answer is A. AWS Secrets Manager is specifically designed for securely storing sensitive information like API credentials, database passwords, and other secrets
upvoted 3 times
...
Nagasoracle
2 weeks, 5 days ago
Selected Answer: B
I agree with Jing2023 answer
upvoted 1 times
...
Jing2023
3 weeks, 5 days ago
Answer is B. A is not correct: the requirement is to store existing API credentials, and GenerateSecretString creates a random string as a password. C: the API credential is retrieved by the Lambda function, so it is unavailable to the template. D: NoEcho is an attribute of the CloudFormation template.
upvoted 3 times
...
dilleman
3 weeks, 6 days ago
Selected Answer: B
B should be correct since the type SecureString encrypts the value i think?
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). Solution (D) stores the API credentials in AWS Systems Manager Parameter Store by using the existing custom resource's Lambda function. The NoEcho attribute prevents the parameter value from being displayed in the console or request history.
upvoted 3 times
...
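For reference, option B's approach can be sketched in Python. The parameter name and the injected fake client are illustrative; `put_parameter` with `Type="SecureString"` is the real boto3 SSM call:

```python
import json

def store_api_credentials(ssm_client, name, credentials):
    """Store API credentials as an encrypted SecureString parameter."""
    ssm_client.put_parameter(
        Name=name,
        Value=json.dumps(credentials),
        Type="SecureString",  # value is encrypted at rest with a KMS key
        Overwrite=True,
    )

class FakeSSM:
    """Minimal stand-in for boto3.client("ssm") so the sketch runs offline."""
    def __init__(self):
        self.params = {}
    def put_parameter(self, Name, Value, Type, Overwrite):
        self.params[Name] = {"Value": Value, "Type": Type}

fake = FakeSSM()
store_api_credentials(fake, "/bank-dashboard/api-creds", {"api_key": "abc123"})
assert fake.params["/bank-dashboard/api-creds"]["Type"] == "SecureString"
```

Calling this from the custom resource's Lambda function avoids putting the credentials in the template at all, which is the point of B over C/D.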
Question #148 Topic 1

A developer is trying to get data from an Amazon DynamoDB table called demoman-table. The developer configured the AWS CLI to use a specific IAM user’s credentials and ran the following command:

aws dynamodb get-item --table-name demoman-table --key '{"id": {"N":"1993"}}'

The command returned errors and no rows were returned.

What is the MOST likely cause of these issues?

  • A. The command is incorrect; it should be rewritten to use put-item with a string argument.
  • B. The developer needs to log a ticket with AWS Support to enable access to the demoman-table.
  • C. Amazon DynamoDB cannot be accessed from the AWS CLI and needs to be called via the REST API.
  • D. The IAM user needs an associated policy with read access to demoman-table.

Correct Answer: A 🗳️

Community vote distribution
D (100%)

Jing2023
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). The command is correct and the demoman-table exists. The most likely issue is that the IAM user does not have a policy with read access to demoman-table. To resolve the issue, the developer must attach a policy to the IAM user that grants read access to demoman-table.
upvoted 3 times
...
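The missing piece in answer D is an identity policy along the lines of the following sketch; the account ID, region, and exact action list are placeholders, not from the question:

```python
import json

# Placeholder account ID and region for illustration.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/demoman-table"

read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Grants the IAM user read access to the table, which fixes
            # the AccessDenied-style errors from get-item.
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:BatchGetItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
        }
    ],
}

print(json.dumps(read_policy, indent=2))
```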
Question #149 Topic 1

An organization is using Amazon CloudFront to ensure that its users experience low-latency access to its web application. The organization has identified a need to encrypt all traffic between users and CloudFront, and all traffic between CloudFront and the web application.

How can these requirements be met? (Choose two.)

  • A. Use AWS KMS to encrypt traffic between CloudFront and the web application.
  • B. Set the Origin Protocol Policy to “HTTPS Only”.
  • C. Set the Origin’s HTTP Port to 443.
  • D. Set the Viewer Protocol Policy to “HTTPS Only” or “Redirect HTTP to HTTPS”.
  • E. Enable the CloudFront option Restrict Viewer Access.

Correct Answer: BD 🗳️

Community vote distribution
BD (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: BD
B and D are the correct ones. B: Setting the Origin Protocol Policy to “HTTPS Only” ensures that CloudFront always uses HTTPS to connect to the origin, which is the web application in this scenario. D: Setting the Viewer Protocol Policy to “HTTPS Only” ensures that CloudFront will only serve requests over HTTPS. Setting it to “Redirect HTTP to HTTPS” ensures that any HTTP request from viewers is redirected to HTTPS.
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: BD
The correct answers are (B) and (D). To meet the requirement to encrypt all traffic between users and CloudFront, your organization must set the Viewer Protocol Policy to “HTTPS Only” or “Redirect HTTP to HTTPS”. This will force users to use HTTPS to connect to CloudFront. To meet the requirement to encrypt all traffic between CloudFront and the web application, your organization must set the Origin Protocol Policy to “HTTPS Only”. This will force CloudFront to use HTTPS to connect to the web application.
upvoted 2 times
...
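Answers B and D map to two fields in the distribution configuration. A sketch with placeholder origin values (the field names and allowed values match the CloudFront API):

```python
# Fragment of a CloudFront distribution config illustrating answers B and D.
distribution_config = {
    "Origins": {
        "Items": [
            {
                "Id": "web-app-origin",
                "DomainName": "app.example.com",  # placeholder origin
                "CustomOriginConfig": {
                    "HTTPSPort": 443,
                    # Answer B: CloudFront always uses HTTPS to the origin.
                    "OriginProtocolPolicy": "https-only",
                },
            }
        ]
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "web-app-origin",
        # Answer D: viewers must use HTTPS (or get redirected to it).
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}

assert distribution_config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] == "redirect-to-https"
```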
Question #150 Topic 1

A developer is planning to migrate on-premises company data to Amazon S3. The data must be encrypted, and the encryption keys must support automatic annual rotation. The company must use AWS Key Management Service (AWS KMS) to encrypt the data.

Which type of keys should the developer use to meet these requirements?

  • A. Amazon S3 managed keys
  • B. Symmetric customer managed keys with key material that is generated by AWS
  • C. Asymmetric customer managed keys with key material that is generated by AWS
  • D. Symmetric customer managed keys with imported key material

Correct Answer: D 🗳️

Community vote distribution
A (57%)
B (43%)

wonder_man
1 week, 3 days ago
Selected Answer: B
Only this option supports AWS KMS with the key rotation
upvoted 1 times
...
PrakashM14
3 weeks, 3 days ago
Selected Answer: B
Asymmetric keys (option C) are typically used for different use cases, such as digital signatures and key pairs, and may not be as suitable for automatic rotation in the described scenario. Imported key material (option D) means that you bring your own key material, and AWS KMS doesn't support automatic rotation for such keys. Amazon S3 managed keys (option A) are used specifically for Amazon S3 and don't support automatic rotation. so, option B is correct
upvoted 2 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: A
A: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: A
A) Amazon S3 Managed Keys https://docs.aws.amazon.com/pt_br/AmazonS3/latest/userguide/serv-side-encryption.html
upvoted 2 times
...
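Answer B can be sketched with the real boto3 KMS calls (`create_key`, `enable_key_rotation`); the fake client below is only so the sketch runs offline, and the description string is a placeholder:

```python
def create_rotating_key(kms_client, description):
    """Create a symmetric customer managed key and turn on annual rotation."""
    key = kms_client.create_key(
        Description=description,
        KeySpec="SYMMETRIC_DEFAULT",   # AWS-generated key material (answer B)
        KeyUsage="ENCRYPT_DECRYPT",
        Origin="AWS_KMS",  # not EXTERNAL: imported material cannot auto-rotate
    )
    key_id = key["KeyMetadata"]["KeyId"]
    kms_client.enable_key_rotation(KeyId=key_id)  # automatic annual rotation
    return key_id

class FakeKMS:
    """Stand-in for boto3.client("kms") so the sketch runs offline."""
    def __init__(self):
        self.rotation = {}
    def create_key(self, **kwargs):
        return {"KeyMetadata": {"KeyId": "key-1", **kwargs}}
    def enable_key_rotation(self, KeyId):
        self.rotation[KeyId] = True

fake = FakeKMS()
assert create_rotating_key(fake, "s3-migration-key") == "key-1"
assert fake.rotation["key-1"] is True
```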
Question #151 Topic 1

A team of developers is using an AWS CodePipeline pipeline as a continuous integration and continuous delivery (CI/CD) mechanism for a web application. A developer has written unit tests to programmatically test the functionality of the application code. The unit tests produce a test report that shows the results of each individual check. The developer now wants to run these tests automatically during the CI/CD process.

Which solution will meet this requirement with the LEAST operational effort?

  • A. Write a Git pre-commit hook that runs the tests before every commit. Ensure that each developer who is working on the project has the pre-commit hook installed locally. Review the test report and resolve any issues before pushing changes to AWS CodeCommit.
  • B. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage after the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
  • C. Add a new stage to the pipeline. Use AWS CodeBuild as the provider. Add the new stage before the stage that deploys code revisions to the test environment. Write a buildspec that fails the CodeBuild stage if any test does not pass. Use the test reports feature of CodeBuild to integrate the report with the CodeBuild console. View the test results in CodeBuild. Resolve any issues.
  • D. Add a new stage to the pipeline. Use Jenkins as the provider. Configure CodePipeline to use Jenkins to run the unit tests. Write a Jenkinsfile that fails the stage if any test does not pass. Use the test report plugin for Jenkins to integrate the report with the Jenkins dashboard. View the test results in Jenkins. Resolve any issues.

Correct Answer: D 🗳️

Community vote distribution
C (83%)
B (17%)

NinjaCloud
1 week, 2 days ago
Correct answer: B
upvoted 1 times
...
Gold07
3 weeks, 3 days ago
C is the correct answer
upvoted 1 times
...
Cerakoted
3 weeks, 5 days ago
Selected Answer: C
I think C is correct. A typical pipeline consists of stages like Build -> Test -> Deploy (test) -> Load Test -> and others
upvoted 2 times
...
dilleman
3 weeks, 6 days ago
Selected Answer: C
C should be correct.
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) is the simplest and requires the least operational effort. It involves adding a new stage to the CodePipeline pipeline that uses AWS CodeBuild to run the unit tests. The CodeBuild stage can be configured to fail if any tests fail. The CodeBuild test report can be integrated into the CodeBuild console so that developers can view test results.
upvoted 1 times
dilleman
3 weeks, 6 days ago
This does not make sense. Why run the tests after the deploy when you can choose option C, to run the tests before the deploy? C should be best practice and the same amount of effort as B.
upvoted 3 times
Dibaal
1 week, 5 days ago
funny 😁
upvoted 1 times
...
...
...
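To make answer C concrete, a minimal buildspec sketch that fails the CodeBuild stage on any test failure and publishes the report to the CodeBuild console; the npm commands, report group name, and report file name are hypothetical:

```yaml
version: 0.2

phases:
  build:
    commands:
      # A non-zero exit code from the test runner fails the CodeBuild stage.
      - npm ci
      - npm test            # hypothetical test command producing junit.xml

reports:
  unit-tests:               # report group shown in the CodeBuild console
    files:
      - junit.xml           # hypothetical report file produced by the tests
    file-format: JUNITXML
```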
Question #152 Topic 1

A company has multiple Amazon VPC endpoints in the same VPC. A developer needs to configure an Amazon S3 bucket policy so users can access an S3 bucket only by using these VPC endpoints.

Which solution will meet these requirements?

  • A. Create multiple S3 bucket policies by using each VPC endpoint ID with the aws:SourceVpce value in the StringNotEquals condition.
  • B. Create a single S3 bucket policy that uses the aws:SourceVpc value in the StringNotEquals condition with the VPC ID.
  • C. Create a single S3 bucket policy that uses the aws:SourceVpce value in the StringNotEquals condition with vpce*.
  • D. Create a single S3 bucket policy that has multiple aws:sourceVpce values in the StringNotEquals condition. Repeat for all the VPC endpoint IDs.

Correct Answer: C 🗳️

Community vote distribution
D (83%)
C (17%)

PrakashM14
3 weeks, 3 days ago
Selected Answer: D
In option C, Condition: { "StringNotEqualsIfExists": { "aws:sourceVpce": "vpce*" } } might deny access from all VPC endpoints, so the answer is D
upvoted 2 times
ekutas
4 days, 10 hours ago
D says "aws:sourceVpce value in the StringNotEquals condition". StringNotEquals won't work; it denies access for the specified VPC endpoint IDs
upvoted 1 times
ekutas
4 days, 10 hours ago
Of course, if we use "Effect": "Allow" ))
upvoted 1 times
...
...
...
dilleman
3 weeks, 6 days ago
Selected Answer: D
C works as well, but it is a broad solution. I think it's better practice to use D and specify the exact endpoints that the user can access from: "aws:sourceVpce": ["vpce-id1", "vpce-id2", "..."]
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Solution (C) is the simplest and will meet the company's requirements. It creates a single S3 bucket policy that has the value aws:SourceVpce and the StringNotEquals condition to use vpce*. This will only allow users who are using a VPC endpoint in the same VPC to access the S3 bucket.
upvoted 1 times
...
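For reference, the single policy in option D can be built as follows; the bucket name and endpoint IDs are placeholders. A Deny with StringNotEquals over the list blocks every request that does not arrive through one of the listed endpoints:

```python
import json

def vpce_only_bucket_policy(bucket, endpoint_ids):
    """Deny all S3 access to the bucket unless the request arrives
    through one of the listed VPC endpoints (option D)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnlessFromVpce",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                "Condition": {
                    # StringNotEquals against a list: the Deny does NOT match
                    # (i.e. access is possible) only for the listed endpoints.
                    "StringNotEquals": {"aws:sourceVpce": endpoint_ids}
                },
            }
        ],
    }

policy = vpce_only_bucket_policy("my-bucket", ["vpce-1111", "vpce-2222"])
print(json.dumps(policy, indent=2))
```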
Question #153 Topic 1

A company uses a custom root certificate authority certificate chain (Root CA Cert) that is 10 KB in size to generate SSL certificates for its on-premises HTTPS endpoints. One of the company’s cloud-based applications has hundreds of AWS Lambda functions that pull data from these endpoints. A developer updated the trust store of the Lambda execution environment to use the Root CA Cert when the Lambda execution environment is initialized. The developer bundled the Root CA Cert as a text file in the Lambda deployment bundle.

After 3 months of development, the Root CA Cert is no longer valid and must be updated. The developer needs a more efficient solution to update the Root CA Cert for all deployed Lambda functions. The solution must not include rebuilding or updating all Lambda functions that use the Root CA Cert. The solution must also work for all development, testing, and production environments. Each environment is managed in a separate AWS account.

Which combination of steps should the developer take to meet these requirements MOST cost-effectively? (Choose two.)

  • A. Store the Root CA Cert as a secret in AWS Secrets Manager. Create a resource-based policy. Add IAM users to allow access to the secret.
  • B. Store the Root CA Cert as a SecureString parameter in AWS Systems Manager Parameter Store. Create a resource-based policy. Add IAM users to allow access to the policy.
  • C. Store the Root CA Cert in an Amazon S3 bucket. Create a resource-based policy to allow access to the bucket.
  • D. Refactor the Lambda code to load the Root CA Cert from the Root CA Cert’s location. Modify the runtime trust store inside the Lambda function handler.
  • E. Refactor the Lambda code to load the Root CA Cert from the Root CA Cert’s location. Modify the runtime trust store outside the Lambda function handler.

Correct Answer: CE 🗳️

Community vote distribution
AE (43%)
BE (29%)
14%
7%

kiwtirApp
Highly Voted 3 weeks, 1 day ago
Selected Answer: AE
The cert is 10 KB: too large for SSM Parameter Store (8 KB max, advanced tier) but within Secrets Manager's 64 KB secret size limit. Correct options are A and E.
upvoted 6 times
...
wonder_man
Most Recent 1 week, 3 days ago
Selected Answer: CE
I can't see why using AWS Secrets Manager can be cost-effective, so I'm voting for C
upvoted 1 times
...
Rameez1
2 weeks, 1 day ago
Selected Answer: BE
Using Parameter Store is more cost-effective than Secrets Manager.
upvoted 2 times
...
TallManDan
2 weeks, 6 days ago
Secrets Manager is an additional cost over Parameter Store. If a question asks for the least overhead, Secrets Manager is more versatile; but for the least cost, Parameter Store is included at no additional charge.
upvoted 2 times
...
PrakashM14
3 weeks, 3 days ago
Selected Answer: BC
Why the remaining answers are not suitable: A. Storing the Root CA Cert in AWS Secrets Manager is a valid option, but Secrets Manager is typically used for managing sensitive information like database credentials. It might be overkill for just a certificate, and using Systems Manager Parameter Store or S3 is a more straightforward solution in this case. D. Refactoring the Lambda code to load the Root CA Cert from its location and modifying the runtime trust store inside the Lambda function handler would require code changes and rebuilding the Lambda functions, which contradicts the requirement of not updating all Lambda functions. E. Refactoring the Lambda code to load the Root CA Cert from its location and modifying the runtime trust store outside the Lambda function handler may still require code changes and may not be as scalable or easily manageable as using Systems Manager Parameter Store or S3.
upvoted 1 times
...
dilleman
3 weeks, 6 days ago
Selected Answer: BE
B. AWS Systems Manager Parameter Store can store data both in plain text and encrypted format (using the SecureString type). It's a cost-effective solution for centralized configuration management across environments and accounts. E. Modifying the runtime trust store outside the Lambda function handler ensures that the trust store is modified only once, when the Lambda container is initialized, making it a more efficient approach than option D, where it's initialized on every invocation.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: AD
the correct answers are (A) and (D). Solution (A) is the most cost-effective as it uses AWS Secrets Manager, which is a managed service. The developer can simply store the root CA certificate as a secret in Secrets Manager and create a resource-based policy to allow IAM users to access the secret. This does not require any modifications to the Lambda code. Solution (D) is also cost-effective as it does not require any modifications to the Lambda code. The developer can simply refactor the Lambda code to load the root CA certificate from the root CA certificate location. This can be done by modifying the runtime trust store outside of the Lambda function handler.
upvoted 2 times
...
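A sketch of option E's key idea: fetch the cert at module scope, so it happens once per execution environment (cold start) rather than once per invocation. The secret name and the stub fetcher are illustrative; in practice the fetch would be a Secrets Manager GetSecretValue or S3 GetObject call:

```python
calls = {"fetches": 0}

def fetch_cert(location):
    """Stand-in for retrieving the Root CA Cert from a central store
    (e.g. Secrets Manager GetSecretValue)."""
    calls["fetches"] += 1
    return "-----BEGIN CERTIFICATE-----\n...cert bytes...\n-----END CERTIFICATE-----"

# Module scope: executed once per Lambda execution environment (cold start),
# so the cert is fetched once and reused across invocations (option E).
ROOT_CA_PEM = fetch_cert("root-ca-cert")  # hypothetical secret name

def handler(event, context):
    # The HTTPS client would be configured with ROOT_CA_PEM here.
    return {"trust_store_ready": ROOT_CA_PEM.startswith("-----BEGIN")}

assert handler({}, None)["trust_store_ready"]
assert handler({}, None)["trust_store_ready"]
assert calls["fetches"] == 1   # still one fetch after two invocations
```

Because the cert lives in one central location, rotating it never requires rebuilding the hundreds of deployed functions.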
Question #154 Topic 1

A developer maintains applications that store several secrets in AWS Secrets Manager. The applications use secrets that have changed over time. The developer needs to identify required secrets that are still in use. The developer does not want to cause any application downtime.

What should the developer do to meet these requirements?

  • A. Configure an AWS CloudTrail log file delivery to an Amazon S3 bucket. Create an Amazon CloudWatch alarm for the GetSecretValue Secrets Manager API operation requests.
  • B. Create a secretsmanager-secret-unused AWS Config managed rule. Create an Amazon EventBridge rule to initiate notifications when the AWS Config managed rule is met.
  • C. Deactivate the applications secrets and monitor the applications error logs temporarily.
  • D. Configure AWS X-Ray for the applications. Create a sampling rule to match the GetSecretValue Secrets Manager API operation requests.

Correct Answer: A 🗳️

Community vote distribution
B (71%)
A (29%)

chris_777
2 days, 3 hours ago
Selected Answer: B
I think B is correct https://docs.aws.amazon.com/config/latest/developerguide/secretsmanager-secret-unused.html A. could work but requires additional work to identify unused secrets. C. is too risky and could cause downtime. D. not the right use case
upvoted 1 times
...
LemonGremlin
2 weeks, 4 days ago
Selected Answer: B
B is correct for this one.
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: A
A is correct. AWS CloudTrail can track API calls, including the GetSecretValue call for AWS Secrets Manager. By setting up CloudTrail log delivery to an S3 bucket, the developer can analyze which secrets are being accessed. Using CloudWatch to create an alarm for the GetSecretValue API call provides insight into which secrets are actively being retrieved, thus indicating which secrets are in use.
upvoted 2 times
dilleman
3 weeks, 3 days ago
I think I'll change my mind to B. B must be correct.
upvoted 3 times
...
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) is the best option to meet the developer's requirements. It allows the developer to identify necessary secrets that are still in use without causing any application downtime.
upvoted 3 times
...
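A minimal CloudFormation sketch of the managed rule in answer B; the resource name and the 90-day threshold are assumptions, while the SECRETSMANAGER_SECRET_UNUSED identifier is the real managed-rule name:

```yaml
Resources:
  UnusedSecretsRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: secretsmanager-secret-unused
      Source:
        Owner: AWS
        SourceIdentifier: SECRETSMANAGER_SECRET_UNUSED
      InputParameters:
        unusedForDays: 90   # flag secrets not retrieved in the last 90 days
```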
Question #155 Topic 1

A developer is writing a serverless application that requires an AWS Lambda function to be invoked every 10 minutes.

What is an automated and serverless way to invoke the function?

  • A. Deploy an Amazon EC2 instance based on Linux, and edit its /etc/crontab file by adding a command to periodically invoke the Lambda function.
  • B. Configure an environment variable named PERIOD for the Lambda function. Set the value to 600.
  • C. Create an Amazon EventBridge rule that runs on a regular schedule to invoke the Lambda function.
  • D. Create an Amazon Simple Notification Service (Amazon SNS) topic that has a subscription to the Lambda function with a 600-second timer.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct. Amazon EventBridge can be used to run Lambda functions on a regular schedule. You can set a cron or rate expression to define the schedule.
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Solution (C) is the best option to meet the developer's requirements. It allows the developer to invoke the Lambda function in an automated and serverless way.
upvoted 2 times
...
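A sketch of answer C with an injected fake client. The `put_rule`/`put_targets` calls and the `rate(10 minutes)` schedule expression are real boto3 EventBridge usage; the rule name and ARN are placeholders:

```python
def schedule_lambda(events_client, rule_name, lambda_arn):
    """Create an EventBridge rule that fires every 10 minutes and
    targets the Lambda function."""
    events_client.put_rule(
        Name=rule_name,
        ScheduleExpression="rate(10 minutes)",  # or cron(0/10 * * * ? *)
        State="ENABLED",
    )
    events_client.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "invoke-fn", "Arn": lambda_arn}],
    )

class FakeEvents:
    """Stand-in for boto3.client("events") so the sketch runs offline."""
    def __init__(self):
        self.rules, self.targets = {}, {}
    def put_rule(self, Name, ScheduleExpression, State):
        self.rules[Name] = ScheduleExpression
    def put_targets(self, Rule, Targets):
        self.targets[Rule] = Targets

fake = FakeEvents()
schedule_lambda(fake, "every-10-min",
                "arn:aws:lambda:us-east-1:123456789012:function:fn")
assert fake.rules["every-10-min"] == "rate(10 minutes)"
```

(In a real deployment the Lambda function also needs a resource-based permission allowing events.amazonaws.com to invoke it.)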
Question #156 Topic 1

A company is using Amazon OpenSearch Service to implement an audit monitoring system. A developer needs to create an AWS CloudFormation custom resource that is associated with an AWS Lambda function to configure the OpenSearch Service domain. The Lambda function must access the OpenSearch Service domain by using OpenSearch Service internal master user credentials.

What is the MOST secure way to pass these credentials to the Lambda function?

  • A. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain’s MasterUserOptions and the Lambda function’s environment variable. Set the NoEcho attribute to true.
  • B. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain’s MasterUserOptions and to create a parameter in AWS Systems Manager Parameter Store. Set the NoEcho attribute to true. Create an IAM role that has the ssm:GetParameter permission. Assign the role to the Lambda function. Store the parameter name as the Lambda function’s environment variable. Resolve the parameter’s value at runtime.
  • C. Use a CloudFormation parameter to pass the master user credentials at deployment to the OpenSearch Service domain’s MasterUserOptions and the Lambda function’s environment variable. Encrypt the parameter’s value by using the AWS Key Management Service (AWS KMS) encrypt command.
  • D. Use CloudFormation to create an AWS Secrets Manager secret. Use a CloudFormation dynamic reference to retrieve the secret’s value for the OpenSearch Service domain’s MasterUserOptions. Create an IAM role that has the secretsmanager:GetSecretValue permission. Assign the role to the Lambda function. Store the secret’s name as the Lambda function’s environment variable. Resolve the secret’s value at runtime.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). Solution (D) is the most secure way to pass the credentials to the Lambda function because it uses AWS Secrets Manager to store the credentials in encrypted form.
upvoted 2 times
...
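An abridged CloudFormation sketch of the dynamic-reference approach in answer D; resource names are hypothetical and unrelated domain properties are omitted. The `{{resolve:secretsmanager:...}}` reference is resolved at deploy time, so the password never appears in the template:

```yaml
Resources:
  MasterUserSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        ExcludeCharacters: '"@/\'

  AuditDomain:
    Type: AWS::OpenSearchService::Domain
    Properties:
      AdvancedSecurityOptions:
        InternalUserDatabaseEnabled: true
        MasterUserOptions:
          MasterUserName: admin
          # Dynamic reference: resolved at deploy time, never stored in the template
          MasterUserPassword: !Sub '{{resolve:secretsmanager:${MasterUserSecret}:SecretString:password}}'
```

The Lambda function then receives only the secret's name via an environment variable and resolves the value at runtime with secretsmanager:GetSecretValue.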
Question #157 Topic 1

An application runs on multiple EC2 instances behind an ELB.

Where is the session data best written so that it can be served reliably across multiple requests?

  • A. Write data to Amazon ElastiCache.
  • B. Write data to Amazon Elastic Block Store.
  • C. Write data to Amazon EC2 Instance Store.
  • D. Write data to the root filesystem.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: A
A is correct. By storing session data in ElastiCache, you ensure that regardless of which EC2 instance handles a given request, the session data can be consistently and rapidly accessed.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Amazon ElastiCache is a distributed memory caching solution that is ideal for session data. ElastiCache provides high-performance and durable session data storage that can be shared across multiple EC2 instances.
upvoted 2 times
...
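To illustrate answer A, a sketch of session reads and writes against a shared cache. The dict-backed FakeCache stands in for an ElastiCache client (e.g. redis-py's Redis, which exposes a similar get/set interface); key names and TTL are assumptions:

```python
class FakeCache:
    """Stand-in for an ElastiCache (Redis-style) client with get/set."""
    def __init__(self):
        self._d = {}
    def set(self, key, value, ex=None):   # ex: TTL in seconds (ignored here)
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

def save_session(cache, session_id, data):
    cache.set(f"session:{session_id}", data, ex=1800)  # 30-minute TTL

def load_session(cache, session_id):
    return cache.get(f"session:{session_id}")

# Any EC2 instance behind the load balancer sees the same session data,
# regardless of which instance served the previous request.
cache = FakeCache()
save_session(cache, "abc123", "user=42")
assert load_session(cache, "abc123") == "user=42"
```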
Question #158 Topic 1

An ecommerce application is running behind an Application Load Balancer. A developer observes some unexpected load on the application during non-peak hours. The developer wants to analyze patterns for the client IP addresses that use the application.

Which HTTP header should the developer use for this analysis?

  • A. The X-Forwarded-Proto header
  • B. The X-Forwarded-Host header
  • C. The X-Forwarded-For header
  • D. The X-Forwarded-Port header

Correct Answer: A 🗳️

Community vote distribution
C (100%)

chris_777
2 days, 3 hours ago
Selected Answer: C
C is correct. X-Forwarded-Proto: protocol (HTTP/HTTPS); X-Forwarded-Host: original Host header requested by the client; X-Forwarded-For: original IP address of the client (CORRECT); X-Forwarded-Port: original port that the client used to connect
upvoted 1 times
...
tapan666
1 week, 2 days ago
Selected Answer: C
C is correct
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct
upvoted 1 times
...
Cerakoted
3 weeks, 6 days ago
Selected Answer: C
X-Forwarded-For HTTP header contains the IP address of the original client
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). The X-Forwarded-For HTTP header contains the IP address of the original client that made the request. The developer can use this header to analyze patterns for the IP addresses of clients using the application.
upvoted 2 times
...
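A small helper showing why X-Forwarded-For is the right header for this analysis: it is a comma-separated chain whose leftmost entry is the original client, with intermediate proxies and load balancers appended after it.

```python
def client_ip(headers):
    """Return the original client IP from X-Forwarded-For, or None."""
    xff = headers.get("X-Forwarded-For", "")
    # Leftmost entry is the client; later entries are intermediaries.
    return xff.split(",")[0].strip() or None

assert client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}) == "203.0.113.7"
assert client_ip({}) is None
```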
Question #159 Topic 1

A developer migrated a legacy application to an AWS Lambda function. The function uses a third-party service to pull data with a series of API calls at the end of each month. The function then processes the data to generate the monthly reports. The function has been working with no issues so far.

The third-party service recently issued a restriction to allow a fixed number of API calls each minute and each day. If the API calls exceed the limit for each minute or each day, then the service will produce errors. The API also provides the minute limit and daily limit in the response header. This restriction might extend the overall process to multiple days because the process is consuming more API calls than the available limit.

What is the MOST operationally efficient way to refactor the serverless application to accommodate this change?

  • A. Use an AWS Step Functions state machine to monitor API failures. Use the Wait state to delay calling the Lambda function.
  • B. Use an Amazon Simple Queue Service (Amazon SQS) queue to hold the API calls. Configure the Lambda function to poll the queue within the API threshold limits.
  • C. Use an Amazon CloudWatch Logs metric to count the number of API calls. Configure an Amazon CloudWatch alarm that stops the currently running instance of the Lambda function when the metric exceeds the API threshold limits.
  • D. Use Amazon Kinesis Data Firehose to batch the API calls and deliver them to an Amazon S3 bucket with an event notification to invoke the Lambda function.

Correct Answer: B 🗳️

Community vote distribution
B (60%)
A (40%)

wonder_man
1 week, 3 days ago
Selected Answer: A
B: I don't see how the Lambda function can be configured this way
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: A
A is Correct. AWS Step Functions can be used to create a workflow to handle the API calls. You can make the Lambda function inspect the response headers from the third-party service to determine the current API call limits and then pass that to the Wait state of the state machine for proper delays.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) is the most operationally efficient way to refactor the serverless application to accommodate this change. This solution allows the Lambda function to continue executing API calls even if the API call limit is reached. The Amazon SQS queue will act as a buffer for API calls that exceed the limit. The Lambda function can then poll the queue within the API limits.
upvoted 3 times
...
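However the queueing is arranged, the consumer has to read the limits from the response headers and throttle itself. A minimal sketch of that decision; the X-Minute-Remaining header name is hypothetical, since the question does not give the real header names:

```python
def next_call_delay(headers):
    """Decide how long to wait before the next third-party API call.

    The header name below is hypothetical; the question only says the
    service returns its per-minute and daily limits in response headers.
    """
    remaining = int(headers.get("X-Minute-Remaining", 1))
    if remaining > 0:
        return 0   # budget left in this minute: call immediately
    return 60      # minute budget exhausted: wait for the next window

assert next_call_delay({"X-Minute-Remaining": "3"}) == 0
assert next_call_delay({"X-Minute-Remaining": "0"}) == 60
```

With answer B, this delay would translate into how the Lambda function paces its polling of the SQS queue of pending calls.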
Question #160 Topic 1

A developer must analyze performance issues with production-distributed applications written as AWS Lambda functions. These distributed Lambda applications invoke other components that make up the applications.

How should the developer identify and troubleshoot the root cause of the performance issues in production?

  • A. Add logging statements to the Lambda functions, then use Amazon CloudWatch to view the logs.
  • B. Use AWS CloudTrail and then examine the logs.
  • C. Use AWS X-Ray, then examine the segments and errors.
  • D. Run Amazon Inspector agents and then analyze performance.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). AWS X-Ray is the best tool for identifying and addressing the root cause of performance issues in distributed production applications. X-Ray provides an overview of the entire call stack, including the Lambda functions and other components they invoke.
upvoted 2 times
...
Question #161 Topic 1

A developer wants to deploy a new version of an AWS Elastic Beanstalk application. During deployment, the application must maintain full capacity and avoid service interruption. Additionally, the developer must minimize the cost of additional resources that support the deployment.

Which deployment method should the developer use to meet these requirements?

  • A. All at once
  • B. Rolling with additional batch
  • C. Blue/green
  • D. Immutable

Correct Answer: B 🗳️

Community vote distribution
B (55%)
D (27%)
C (18%)

Roimasu
1 week ago
Selected Answer: D
This method performs updates by launching a new set of instances in a new Auto Scaling group. Once the new instances pass health checks, they are moved into the existing Auto Scaling group, and the old instances are terminated. This method ensures full capacity, avoids downtime, and minimizes additional costs because it does not double the environment's running resources for an extended period. It adds resources temporarily and only in the amount necessary to maintain capacity.
upvoted 1 times
...
NinjaCloud
1 week, 1 day ago
Should be B. "Ultimately, the choice between "Rolling with additional batch" and "Blue/green" deployments should depend on your specific requirements and constraints. If maintaining full capacity is a crucial factor, then "Rolling with additional batch" could be the better choice."
upvoted 1 times
...
ut18
1 week, 5 days ago
MS Bing's answer: B vs. ChatGPT's answer: C. Your choice?
upvoted 1 times
...
Nagasoracle
2 weeks, 5 days ago
Selected Answer: B
B: Rolling with additional batch. Considering "minimize the cost of additional resources," C is costlier than B because blue/green runs double capacity.
upvoted 4 times
...
Learning4life
3 weeks, 2 days ago
C and D are wrong, since they both require additional resources.
upvoted 1 times
...
joosh96
3 weeks, 4 days ago
Selected Answer: C
ChatGPT replied
upvoted 1 times
...
Cerakoted
3 weeks, 5 days ago
Selected Answer: B
Answer is B. One of the requirements: the developer [must minimize the cost of additional resources] that support the deployment.
upvoted 2 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
I vote for D since the requirement is to minimize the costs of resources. Blue/green is a good and safe way to solve this but it costs more resources than an Immutable rollout. Immutable: Launches a new set of instances in a new temporary environment to ensure that the new version works as expected. Once the new version is verified, traffic is rerouted to the new set of instances, and the old instances are terminated. This method maintains full capacity, avoids service interruptions, and minimizes the cost compared to blue/green deployments since the overlap in running resources is shorter.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). The blue/green deployment method is the best option to meet the developer's requirements. Blue/green allows the developer to deploy a new version of the application without service interruption. This is done by creating a blue production environment and a green production environment. The blue environment is the current production environment and the green environment is the new version of the application. The developer can then test the new version of the application in the green environment before putting it into production.
upvoted 1 times
...
Question #162 Topic 1

A developer has observed an increase in bugs in the AWS Lambda functions that a development team has deployed in its Node.js application. To minimize these bugs, the developer wants to implement automated testing of Lambda functions in an environment that closely simulates the Lambda environment.

The developer needs to give other developers the ability to run the tests locally. The developer also needs to integrate the tests into the team’s continuous integration and continuous delivery (CI/CD) pipeline before the AWS Cloud Development Kit (AWS CDK) deployment.

Which solution will meet these requirements?

  • A. Create sample events based on the Lambda documentation. Create automated test scripts that use the cdk local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
  • B. Install a unit testing framework that reproduces the Lambda execution environment. Create sample events based on the Lambda documentation. Invoke the handler function by using a unit testing framework. Check the response. Document how to run the unit testing framework for the other developers on the team. Update the CI/CD pipeline to run the unit testing framework.
  • C. Install the AWS Serverless Application Model (AWS SAM) CLI tool. Use the sam local generate-event command to generate sample events for the automated tests. Create automated test scripts that use the sam local invoke command to invoke the Lambda functions. Check the response. Document the test scripts for the other developers on the team. Update the CI/CD pipeline to run the test scripts.
  • D. Create sample events based on the Lambda documentation. Create a Docker container from the Node.js base image to invoke the Lambda functions. Check the response. Document how to run the Docker container for the other developers on the team. Update the CI/CD pipeline to run the Docker container.

Correct Answer: B 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C should be correct
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Solution (C) is the best option to meet the developer's requirements. The AWS SAM CLI tool provides an easy way to generate sample events and invoke Lambda functions locally. The solution is also easy to document and integrate into the CI/CD pipeline.
upvoted 3 times
...
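Whichever tool generates the events, the automated test itself ends up invoking the handler with a sample event and checking the response. A minimal sketch, using an S3 "put" event of the shape that `sam local generate-event s3 put` produces; the bucket and key names here are made-up placeholders.

```python
import json

# Minimal sketch of an automated test around a Lambda handler: feed it a
# sample S3 "put" event and assert on the response. Bucket and key names
# are made-up placeholders.
SAMPLE_S3_PUT_EVENT = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "example-bucket"},
                "object": {"key": "uploads/report.csv"},
            },
        }
    ]
}

def handler(event, context=None):
    # Example handler under test: extract the uploaded object's location.
    record = event["Records"][0]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }),
    }

def test_handler_extracts_object_location():
    response = handler(SAMPLE_S3_PUT_EVENT)
    body = json.loads(response["body"])
    assert response["statusCode"] == 200
    assert body == {"bucket": "example-bucket", "key": "uploads/report.csv"}

test_handler_extracts_object_location()
```

The same script runs unchanged on a developer laptop and in a CI/CD stage, which is what the question's "run the tests locally" requirement is getting at.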
Question #163 Topic 1

A developer is troubleshooting an application that uses Amazon DynamoDB in the us-west-2 Region. The application is deployed to an Amazon EC2 instance. The application requires read-only permissions to a table that is named Cars. The EC2 instance has an attached IAM role that contains the following IAM policy:



When the application tries to read from the Cars table, an Access Denied error occurs.

How can the developer resolve this error?

  • A. Modify the IAM policy resource to be “arn:aws:dynamodb:us-west-2:account-id:table/*”.
  • B. Modify the IAM policy to include the dynamodb:* action.
  • C. Create a trust policy that specifies the EC2 service principal. Associate the role with the policy.
  • D. Create a trust relationship between the role and dynamodb.amazonaws.com.

Correct Answer: D 🗳️

Community vote distribution
C (75%)
D (25%)

LemonGremlin
2 weeks, 4 days ago
Selected Answer: C
The most reasonable answer here is C. But I think the question is missing some information. https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/
upvoted 1 times
...
PrakashM14
2 weeks, 5 days ago
Selected Answer: D
D. Create a trust relationship between the role and dynamodb.amazonaws.com. Explanation: in AWS, a trust relationship defines who or what entity can assume a role. In this case, the role attached to the EC2 instance needs to trust DynamoDB. The trust relationship is specified in a JSON policy document. The service principal for DynamoDB is dynamodb.amazonaws.com; this is the entity that the role needs to trust to allow access to DynamoDB resources.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/96497-exam-aws-certified-developer-associate-topic-1-question-380/
upvoted 2 times
...
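The distinction the thread hinges on is between the two documents a role carries. The trust policy says WHO may assume the role (for an instance role, the EC2 service principal, per answer C, not DynamoDB), while the permissions policy says WHAT the role may do. A sketch of both, with a placeholder account ID:

```python
import json

# Sketch of the two policy documents behind answer C. The trust policy
# names who may assume the role (the EC2 service); the permissions policy
# grants the DynamoDB read actions. Account ID is a made-up placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/Cars",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

A trust relationship with dynamodb.amazonaws.com (answer D) would let the DynamoDB service assume the role, which is not how an application on EC2 reads a table.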
Question #164 Topic 1

When using the AWS Encryption SDK, how does the developer keep track of the data encryption keys used to encrypt data?

  • A. The developer must manually keep track of the data encryption keys used for each data object.
  • B. The SDK encrypts the data encryption key and stores it (encrypted) as part of the returned ciphertext.
  • C. The SDK stores the data encryption keys automatically in Amazon S3.
  • D. The data encryption key is stored in the Userdata for the EC2 instance.

Correct Answer: C 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/96427-exam-aws-certified-developer-associate-topic-1-question-398/
upvoted 1 times
...
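Answer B describes envelope encryption: the data key that encrypted the payload is itself encrypted under a master key and shipped inside the returned message, so nothing has to be tracked separately. A toy sketch of the pattern, where XOR stands in for real ciphers (never use XOR for actual encryption; the AWS Encryption SDK does this with real algorithms and a KMS-backed master key):

```python
import os

# Toy sketch of envelope encryption, the pattern behind answer B: the
# per-message data key is encrypted under a master key and stored INSIDE
# the returned message alongside the ciphertext. XOR is a stand-in for a
# real cipher and is NOT secure.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

MASTER_KEY = os.urandom(16)  # stands in for a KMS key that never leaves KMS

def encrypt(plaintext: bytes) -> dict:
    data_key = os.urandom(16)  # fresh data key per message
    return {
        # The encrypted data key travels with the ciphertext, so the
        # caller never tracks data keys itself.
        "encrypted_data_key": xor(data_key, MASTER_KEY),
        "ciphertext": xor(plaintext, data_key),
    }

def decrypt(message: dict) -> bytes:
    data_key = xor(message["encrypted_data_key"], MASTER_KEY)
    return xor(message["ciphertext"], data_key)

message = encrypt(b"access-token-123")
recovered = decrypt(message)
```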
Question #165 Topic 1

An application that runs on AWS Lambda requires access to specific highly confidential objects in an Amazon S3 bucket. In accordance with the principle of least privilege, a company grants access to the S3 bucket by using only temporary credentials.

How can a developer configure access to the S3 bucket in the MOST secure way?

  • A. Hardcode the credentials that are required to access the S3 objects in the application code. Use the credentials to access the required S3 objects.
  • B. Create a secret access key and access key ID with permission to access the S3 bucket. Store the key and key ID in AWS Secrets Manager. Configure the application to retrieve the Secrets Manager secret and use the credentials to access the S3 objects.
  • C. Create a Lambda function execution role. Attach a policy to the role that grants access to specific objects in the S3 bucket.
  • D. Create a secret access key and access key ID with permission to access the S3 bucket. Store the key and key ID as environment variables in Lambda. Use the environment variables to access the required S3 objects.

Correct Answer: D 🗳️

Community vote distribution
C (63%)
B (38%)

LemonGremlin
2 weeks, 6 days ago
Selected Answer: C
C. Create a Lambda function execution role. Attach a policy to the role that grants access to specific objects in the S3 bucket.
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: C
C should be correct: https://docs.aws.amazon.com/lambda/latest/operatorguide/least-privilege.html
upvoted 4 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Option (B) is the most secure way to configure S3 bucket access because the credentials are stored in a safe and secure location. AWS Secrets Manager uses public key cryptography to protect stored secrets.
upvoted 3 times
dezoito
3 weeks ago
B goes against the least privilege principle because it gives access to the whole bucket
upvoted 2 times
...
...
Question #166 Topic 1

A developer has code that is stored in an Amazon S3 bucket. The code must be deployed as an AWS Lambda function across multiple accounts in the same AWS Region as the S3 bucket. An AWS CloudFormation template that runs for each account will deploy the Lambda function.

What is the MOST secure way to allow CloudFormation to access the Lambda code in the S3 bucket?

  • A. Grant the CloudFormation service role the S3 ListBucket and GetObject permissions. Add a bucket policy to Amazon S3 with the principal of “AWS”: [account numbers].
  • B. Grant the CloudFormation service role the S3 GetObject permission. Add a bucket policy to Amazon S3 with the principal of “*”.
  • C. Use a service-based link to grant the Lambda function the S3 ListBucket and GetObject permissions by explicitly adding the S3 bucket’s account number in the resource.
  • D. Use a service-based link to grant the Lambda function the S3 GetObject permission. Add a resource of “*” to allow access to the S3 bucket.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Option (A) is the safest way to allow CloudFormation to access the Lambda code in the S3 bucket because it limits access to the specific accounts that need to deploy the Lambda functions. The bucket policy grants S3 ListBucket and GetObject permissions to the CloudFormation service role only for the accounts specified in the principal.
upvoted 2 times
...
Question #167 Topic 1

A developer at a company needs to create a small application that makes the same API call once each day at a designated time. The company does not have infrastructure in the AWS Cloud yet, but the company wants to implement this functionality on AWS.

Which solution meets these requirements in the MOST operationally efficient manner?

  • A. Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS).
  • B. Use an Amazon Linux crontab scheduled job that runs on Amazon EC2.
  • C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.
  • D. Use an AWS Batch job that is submitted to an AWS Batch job queue.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/88703-exam-aws-certified-developer-associate-topic-1-question-229/
upvoted 1 times
...
Question #168 Topic 1

A developer is building a serverless application that is based on AWS Lambda. The developer initializes the AWS software development kit (SDK) outside of the Lambda handler function.

What is the PRIMARY benefit of this action?

  • A. Improves legibility and stylistic convention
  • B. Takes advantage of runtime environment reuse
  • C. Provides better error handling
  • D. Creates a new SDK instance for each invocation

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B it is!
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Initializing the AWS SDK outside of the Lambda handler function takes advantage of runtime environment reuse. This means that the SDK only needs to be initialized once for all Lambda function invocations. This can improve application performance and efficiency.
upvoted 2 times
...
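The reuse in answer B is easy to demonstrate: anything at module level runs once per Lambda execution environment (on cold start) and is reused by every later warm invocation, while code inside the handler runs on every call. A minimal sketch, where a counter stands in for expensive setup such as creating a boto3 client:

```python
# Sketch of why answer B is the benefit: module-level code runs once per
# execution environment and is reused across warm invocations. INIT_COUNT
# stands in for an expensive step such as creating an SDK client.

INIT_COUNT = 0

def create_client():
    global INIT_COUNT
    INIT_COUNT += 1          # expensive setup happens here
    return object()          # stands in for e.g. boto3.client("dynamodb")

CLIENT = create_client()     # module scope: runs once, at cold start

def handler(event, context=None):
    # CLIENT is reused; no per-invocation setup cost.
    return {"init_count": INIT_COUNT}

# Two "warm" invocations reuse the same initialization:
first = handler({})
second = handler({})
```

Had `create_client()` been called inside the handler instead, the counter would grow with every invocation, which is option D's behavior and the cost the question asks about avoiding.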
Question #169 Topic 1

A company is using Amazon RDS as the backend database for its application. After a recent marketing campaign, a surge of read requests to the database increased the latency of data retrieval from the database. The company has decided to implement a caching layer in front of the database. The cached content must be encrypted and must be highly available.

Which solution will meet these requirements?

  • A. Amazon CloudFront
  • B. Amazon ElastiCache for Memcached
  • C. Amazon ElastiCache for Redis in cluster mode
  • D. Amazon DynamoDB Accelerator (DAX)

Correct Answer: C 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
Should be C since ElastiCache for Redis supports encryption at rest and in transit. ElastiCache for Memcached does not support encryption at rest. DynamoDB Accelerator is for DynamoDB and does not fit this case.
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/82917-exam-aws-certified-developer-associate-topic-1-question-95/
upvoted 1 times
...
Question #170 Topic 1

A developer at a company recently created a serverless application to process and show data from business reports. The application’s user interface (UI) allows users to select and start processing the files. The UI displays a message when the result is available to view. The application uses AWS Step Functions with AWS Lambda functions to process the files. The developer used Amazon API Gateway and Lambda functions to create an API to support the UI.

The company’s UI team reports that the request to process a file is often returning timeout errors because of the size or complexity of the files. The UI team wants the API to provide an immediate response so that the UI can display a message while the files are being processed. The backend process that is invoked by the API needs to send an email message when the report processing is complete.

What should the developer do to configure the API to meet these requirements?

  • A. Change the API Gateway route to add an X-Amz-Invocation-Type header with a static value of ‘Event’ in the integration request. Deploy the API Gateway stage to apply the changes.
  • B. Change the configuration of the Lambda function that implements the request to process a file. Configure the maximum age of the event so that the Lambda function will run asynchronously.
  • C. Change the API Gateway timeout value to match the Lambda function timeout value. Deploy the API Gateway stage to apply the changes.
  • D. Change the API Gateway route to add an X-Amz-Target header with a static value of ‘Async’ in the integration request. Deploy the API Gateway stage to apply the changes.

Correct Answer: A 🗳️

Community vote distribution
A (60%)
D (40%)

LemonGremlin
2 weeks, 4 days ago
Selected Answer: A
Reference: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-integration-async.html
upvoted 1 times
...
kashtelyan
2 weeks, 5 days ago
Selected Answer: D
Option A involves changing the API Gateway route to add an X-Amz-Invocation-Type header with a static value of 'Event' in the integration request. This header is typically used when you want to invoke a Lambda function asynchronously, but it doesn't ensure that you get an immediate response. It essentially sends the request to a queue for asynchronous execution and doesn't wait for the processing to complete before providing a response. In contrast, option D suggests using the X-Amz-Target header with a static value of 'Async,' which is a more appropriate choice when you need to provide an immediate response to the client while offloading the processing for background execution. This approach better aligns with the requirement of displaying a message to the user while the files are being processed, which is typically achieved through asynchronous processing with notification upon completion.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: A
A) https://www.examtopics.com/discussions/amazon/view/82655-exam-aws-certified-developer-associate-topic-1-question-85/
upvoted 2 times
...
Question #171 Topic 1

A developer has an application that is composed of many different AWS Lambda functions. The Lambda functions all use some of the same dependencies. To avoid security issues, the developer is constantly updating the dependencies of all of the Lambda functions. The result is duplicated effort for each function.

How can the developer keep the dependencies of the Lambda functions up to date with the LEAST additional complexity?

  • A. Define a maintenance window for the Lambda functions to ensure that the functions get updated copies of the dependencies.
  • B. Upgrade the Lambda functions to the most recent runtime version.
  • C. Define a Lambda layer that contains all of the shared dependencies.
  • D. Use an AWS CodeCommit repository to host the dependencies in a centralized location.

Correct Answer: C 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
C) https://www.examtopics.com/discussions/amazon/view/96245-exam-aws-certified-developer-associate-topic-1-question-436/
upvoted 1 times
...
Question #172 Topic 1

A mobile app stores blog posts in an Amazon DynamoDB table. Millions of posts are added every day, and each post represents a single item in the table. The mobile app requires only recent posts. Any post that is older than 48 hours can be removed.

What is the MOST cost-effective way to delete posts that are older than 48 hours?

  • A. For each item, add a new attribute of type String that has a timestamp that is set to the blog post creation time. Create a script to find old posts with a table scan and remove posts that are older than 48 hours by using the BatchWriteItem API operation. Schedule a cron job on an Amazon EC2 instance once an hour to start the script.
  • B. For each item, add a new attribute of type String that has a timestamp that is set to the blog post creation time. Create a script to find old posts with a table scan and remove posts that are older than 48 hours by using the BatchWriteItem API operation. Place the script in a container image. Schedule an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate that invokes the container every 5 minutes.
  • C. For each item, add a new attribute of type Date that has a timestamp that is set to 48 hours after the blog post creation time. Create a global secondary index (GSI) that uses the new attribute as a sort key. Create an AWS Lambda function that references the GSI and removes expired items by using the BatchWriteItem API operation. Schedule the function with an Amazon CloudWatch event every minute.
  • D. For each item, add a new attribute of type Number that has a timestamp that is set to 48 hours after the blog post creation time. Configure the DynamoDB table with a TTL that references the new attribute.

Correct Answer: B 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct. DynamoDB tables can clean up data itself based on provided configuration.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). Solution (D) is the most cost-effective because it uses DynamoDB's Time to Live (TTL) to automatically remove expired items. The TTL is an item attribute that specifies the duration of time that an item should remain in the table. When an item's TTL expires, the item is automatically deleted from the table.
upvoted 1 times
...
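The item shape answer D relies on is simple: DynamoDB TTL expects a Number attribute holding an epoch-seconds timestamp, and items whose value is in the past are deleted automatically at no extra write cost. A sketch of building such an item, with made-up attribute names:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the item shape behind answer D: DynamoDB TTL reads a Number
# attribute holding an epoch-seconds timestamp and deletes items once the
# time has passed. Attribute names here are made-up examples.

TTL_ATTRIBUTE = "expires_at"

def build_post_item(post_id: str, body: str, created_at: datetime) -> dict:
    expires = created_at + timedelta(hours=48)
    return {
        "post_id": post_id,
        "body": body,
        TTL_ATTRIBUTE: int(expires.timestamp()),  # epoch seconds, Number type
    }

created = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
item = build_post_item("p1", "hello", created)
```

Enabling TTL on the table (pointing it at `expires_at`) is a one-time configuration step; after that no scans, scripts, or scheduled jobs are needed, which is why D is the most cost-effective option.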
Question #173 Topic 1

A developer is modifying an existing AWS Lambda function. While checking the code, the developer notices hardcoded parameter values for an Amazon RDS for SQL Server user name, password, database, host, and port. There are also hardcoded parameter values for an Amazon DynamoDB table, an Amazon S3 bucket, and an Amazon Simple Notification Service (Amazon SNS) topic.

The developer wants to securely store the parameter values outside the code in an encrypted format and wants to turn on rotation for the credentials. The developer also wants to be able to reuse the parameter values from other applications and to update the parameter values without modifying code.

Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic.
  • B. Create an RDS database secret in AWS Secrets Manager. Set the user name, password, database, host, and port. Turn on secret rotation. Create SecureString parameters in AWS Systems Manager Parameter Store for the DynamoDB table, S3 bucket, and SNS topic.
  • C. Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Create encrypted Lambda environment variables for the DynamoDB table, S3 bucket, and SNS topic. Create a Lambda function and set the logic for the credentials rotation task. Schedule the credentials rotation task in Amazon EventBridge.
  • D. Create RDS database parameters in AWS Systems Manager Parameter Store for the user name, password, database, host, and port. Store the DynamoDB table, S3 bucket, and SNS topic in Amazon S3. Create a Lambda function and set the logic for the credentials rotation. Invoke the Lambda function on a schedule.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/88929-exam-aws-certified-developer-associate-topic-1-question-338/
upvoted 1 times
...
Question #174 Topic 1

A developer accesses AWS CodeCommit over SSH. The SSH keys configured to access AWS CodeCommit are tied to a user with the following permissions:



The developer needs to create/delete branches.

Which specific IAM permissions need to be added, based on the principle of least privilege?

  • A. "codecommit:CreateBranch"
    "codecommit:DeleteBranch"
  • B. "codecommit:Put*"
  • C. "codecommit:Update*"
  • D. "codecommit:*"

Correct Answer: B 🗳️

Community vote distribution
A (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: A
A of course
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
A) https://www.examtopics.com/discussions/amazon/view/4364-exam-aws-certified-developer-associate-topic-1-question-190/
upvoted 1 times
...
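Least privilege here means granting exactly the two branch actions from answer A and nothing broader. A sketch of the policy statement, with a made-up repository ARN:

```python
import json

# Sketch of answer A as an IAM policy statement: grant exactly the two
# branch actions, scoped to one repository. The ARN is a made-up
# placeholder.
branch_statement = {
    "Effect": "Allow",
    "Action": ["codecommit:CreateBranch", "codecommit:DeleteBranch"],
    "Resource": "arn:aws:codecommit:us-east-1:111122223333:demo-repo",
}

# Wildcards such as codecommit:Put* or codecommit:* would also allow the
# operations, but they grant far more than branch management, which is why
# B, C, and D violate least privilege.
print(json.dumps(branch_statement))
```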
Question #175 Topic 1

An application that is deployed to Amazon EC2 is using Amazon DynamoDB. The application calls the DynamoDB REST API. Periodically, the application receives a ProvisionedThroughputExceededException error when the application writes to a DynamoDB table.

Which solutions will mitigate this error MOST cost-effectively? (Choose two.)

  • A. Modify the application code to perform exponential backoff when the error is received.
  • B. Modify the application to use the AWS SDKs for DynamoDB.
  • C. Increase the read and write throughput of the DynamoDB table.
  • D. Create a DynamoDB Accelerator (DAX) cluster for the DynamoDB table.
  • E. Create a second DynamoDB table. Distribute the reads and writes between the two tables.

Correct Answer: AB 🗳️

Community vote distribution
AB (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: AB
A and B. Exponential backoff is a standard error-handling strategy for network applications. The idea is to retry a failed request with increasing delays between each attempt. And the AWS SDKs have built-in support for handling these errors.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: AB
A and B: https://www.examtopics.com/discussions/amazon/view/69199-exam-aws-certified-developer-associate-topic-1-question-385/
upvoted 2 times
...
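The backoff pattern from answer A can be sketched in a few lines: retry a throttled call with exponentially growing, jittered delays. The AWS SDKs (answer B) implement this automatically; the operation and error type below are stand-ins, and the delay values are illustrative.

```python
import random
import time

# Sketch of exponential backoff with full jitter (answer A). The AWS SDKs
# do this for you (answer B); the error class and operation are stand-ins.

class ProvisionedThroughputExceeded(Exception):
    pass

def call_with_backoff(op, max_attempts=5, base_delay=0.05, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return op()
        except ProvisionedThroughputExceeded:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the throttling error
            # Full jitter: random delay up to base_delay * 2^attempt.
            sleep(random.uniform(0, base_delay * 2 ** attempt))

# Simulated operation that is throttled twice, then succeeds:
attempts = {"n": 0}

def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ProvisionedThroughputExceeded()
    return "ok"

# Inject a no-op sleep so the example runs instantly.
result = call_with_backoff(flaky_write, sleep=lambda _: None)
```

Spreading retries out this way lets short throughput spikes pass without paying for permanently higher provisioned capacity, which is why A and B are the cost-effective pair.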
Question #176 Topic 1

When a developer tries to run an AWS CodeBuild project, it raises an error because the length of all environment variables exceeds the limit for the combined maximum of characters.

What is the recommended solution?

  • A. Add the export LC_ALL="en_US.utf8" command to the pre_build section to ensure POSIX localization.
  • B. Use Amazon Cognito to store key-value pairs for large numbers of environment variables.
  • C. Update the settings for the build project to use an Amazon S3 bucket for large numbers of environment variables.
  • D. Use AWS Systems Manager Parameter Store to store large numbers of environment variables.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
Best solution is D
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html
upvoted 1 times
...
Question #177 Topic 1

A company is expanding the compatibility of its photo-sharing mobile app to hundreds of additional devices with unique screen dimensions and resolutions. Photos are stored in Amazon S3 in their original format and resolution. The company uses an Amazon CloudFront distribution to serve the photos. The app includes the dimension and resolution of the display as GET parameters with every request.

A developer needs to implement a solution that optimizes the photos that are served to each device to reduce load time and increase photo quality.

Which solution will meet these requirements MOST cost-effectively?

  • A. Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a dynamic CloudFront origin that automatically maps the request of each device to the corresponding photo variant.
  • B. Use S3 Batch Operations to invoke an AWS Lambda function to create new variants of the photos with the required dimensions and resolutions. Create a Lambda@Edge function to route requests to the corresponding photo variant by using request headers.
  • C. Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. Change the CloudFront TTL cache policy to the maximum value possible.
  • D. Create a Lambda@Edge function that optimizes the photos upon request and returns the photos as a response. In the same function, store a copy of the processed photos on Amazon S3 for subsequent requests.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

jingle4944
18 hours, 52 minutes ago
According to https://aws.amazon.com/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/, "static resources like images should have as long a Time to Live (TTL) as possible to improve cache-hit ratios." The photo cache here is likely to be static and should be preserved forever.
upvoted 1 times
...
ut18
1 week, 3 days ago
Why not B? The developer can use S3 Batch Operations to create new variants of the photos with the required dimensions and resolutions.
upvoted 1 times
...
TallManDan
2 weeks, 6 days ago
Selected Answer: D
You only want to convert the pictures that get requests. If you convert them all through batch processing, you have wasted time and expense on any possible photo that never gets viewed. The Minimum TTL is set to 60 seconds, the Default TTL is set to 300 seconds, and the Maximum TTL is set to 3600 seconds. S3 is the way to go.
upvoted 2 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/89564-exam-aws-certified-developer-associate-topic-1-question-320/
upvoted 1 times
...
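Option D pairs on-demand optimization with a stored copy, and the reason "subsequent requests" become cheap is that each variant gets a deterministic S3 key the function can check before re-processing. A minimal sketch of that key mapping, assuming hypothetical `width`/`height` query parameters:

```python
# Sketch (hypothetical naming): derive a deterministic S3 key for a resized
# photo variant so a Lambda@Edge function can look for a stored copy before
# re-processing the original. The width/height parameters are assumptions.
from urllib.parse import parse_qs

def variant_key(uri: str, query: str) -> str:
    """Map /photos/cat.jpg?width=200&height=300 -> photos/200x300/cat.jpg."""
    params = parse_qs(query)
    width = params.get("width", ["original"])[0]
    height = params.get("height", ["original"])[0]
    prefix, _, filename = uri.lstrip("/").rpartition("/")
    return f"{prefix}/{width}x{height}/{filename}"
```

On a request, the function would first attempt an S3 GetObject on this key and only resize (then PutObject) on a miss.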
Question #178 Topic 1

A company is building an application for stock trading. The application needs sub-millisecond latency for processing trade requests. The company uses Amazon DynamoDB to store all the trading data that is used to process each trading request.

A development team performs load testing on the application and finds that the data retrieval time is higher than expected. The development team needs a solution that reduces the data retrieval time with the least possible effort.

Which solution meets these requirements?

  • A. Add local secondary indexes (LSIs) for the trading data.
  • B. Store the trading data in Amazon S3, and use S3 Transfer Acceleration.
  • C. Add retries with exponential backoff for DynamoDB queries.
  • D. Use DynamoDB Accelerator (DAX) to cache the trading data.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
This is a perfect scenario for DAX so correct answer is D
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/4971-exam-aws-certified-developer-associate-topic-1-question-14/
upvoted 1 times
...
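What makes D low-effort is that DAX is a transparent read-through (and write-through) cache: the query code does not change, only the client does. This pure-Python sketch is an illustration of the read-through idea, not the DAX client itself:

```python
# Illustration only: DAX transparently adds a read-through cache in front of
# DynamoDB reads. This sketch mimics that behavior so the latency win is easy
# to see; in practice you would swap boto3's DynamoDB client for the
# AmazonDaxClient and leave the query code unchanged.
class ReadThroughCache:
    def __init__(self, fetch):
        self._fetch = fetch          # underlying (slow) lookup, e.g. GetItem
        self._cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self._cache:   # miss: go to the table, remember result
            self.misses += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]      # hit: served from memory
```

Repeated reads of hot trading data hit the in-memory copy, which is what brings retrieval down to sub-millisecond latency.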
Question #179 Topic 1

A developer is working on a Python application that runs on Amazon EC2 instances. The developer wants to enable tracing of application requests to debug performance issues in the code.

Which combination of actions should the developer take to achieve this goal? (Choose two.)

  • A. Install the Amazon CloudWatch agent on the EC2 instances.
  • B. Install the AWS X-Ray daemon on the EC2 instances.
  • C. Configure the application to write JSON-formatted logs to /var/log/cloudwatch.
  • D. Configure the application to write trace data to /var/log/xray.
  • E. Install and configure the AWS X-Ray SDK for Python in the application.

Correct Answer: CE 🗳️

Community vote distribution
BE (100%)

NinjaCloud
1 week ago
Answer: E,B
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: BE
B and E
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: BE
The correct answers are (E) and (B). (E) is the most important action to enable application request tracking using AWS X-Ray. The AWS X-Ray SDK for Python provides a set of APIs that a developer can use to instrument their application code for tracing. (B) is the second most important action. The AWS X-Ray daemon runs on each EC2 instance and collects application trace data
upvoted 2 times
...
Question #180 Topic 1

A company has an application that runs as a series of AWS Lambda functions. Each Lambda function receives data from an Amazon Simple Notification Service (Amazon SNS) topic and writes the data to an Amazon Aurora DB instance.

To comply with an information security policy, the company must ensure that the Lambda functions all use a single securely encrypted database connection string to access Aurora.

Which solution will meet these requirements?

  • A. Use IAM database authentication for Aurora to enable secure database connections for all the Lambda functions.
  • B. Store the credentials and read the credentials from an encrypted Amazon RDS DB instance.
  • C. Store the credentials in AWS Systems Manager Parameter Store as a secure string parameter.
  • D. Use Lambda environment variables with a shared AWS Key Management Service (AWS KMS) key for encryption.

Correct Answer: D 🗳️

Community vote distribution
C (44%)
D (33%)
A (22%)

TallManDan
2 weeks, 6 days ago
Selected Answer: A
https://aws.amazon.com/blogs/database/iam-role-based-authentication-to-amazon-aurora-from-serverless-applications/
upvoted 2 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: C
C. AWS Systems Manager Parameter Store offers a more centralized way to manage encrypted secrets across multiple services than Lambda environment variables, making it a better fit for this scenario.
upvoted 4 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). Solution (D) is the best option because it uses Lambda environment variables with an AWS Key Management Service (AWS KMS) shared key for encryption.
upvoted 3 times
...
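For the community-favored option C, each Lambda function would read the same SecureString parameter at startup. A sketch of the call, with a placeholder parameter name and the real boto3 call commented out so the helper stays runnable without AWS credentials:

```python
# Sketch of option C: every Lambda function reads one SecureString parameter
# from Parameter Store. WithDecryption must be True for SecureString values;
# the parameter name is a placeholder.
def get_parameter_request(name: str) -> dict:
    return {
        "Name": name,
        "WithDecryption": True,  # required to decrypt SecureString parameters
    }

# import boto3
# ssm = boto3.client("ssm")
# conn_str = ssm.get_parameter(
#     **get_parameter_request("/app/db/connstring"))["Parameter"]["Value"]
```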
Question #181 Topic 1

A developer is troubleshooting an Amazon API Gateway API. Clients are receiving HTTP 400 response errors when the clients try to access an endpoint of the API.

How can the developer determine the cause of these errors?

  • A. Create an Amazon Kinesis Data Firehose delivery stream to receive API call logs from API Gateway. Configure Amazon CloudWatch Logs as the delivery stream’s destination.
  • B. Turn on AWS CloudTrail Insights and create a trail. Specify the Amazon Resource Name (ARN) of the trail for the stage of the API.
  • C. Turn on AWS X-Ray for the API stage. Create an Amazon CloudWatch Logs log group. Specify the Amazon Resource Name (ARN) of the log group for the API stage.
  • D. Turn on execution logging and access logging in Amazon CloudWatch Logs for the API stage. Create a CloudWatch Logs log group. Specify the Amazon Resource Name (ARN) of the log group for the API stage.

Correct Answer: A 🗳️

Community vote distribution
D (100%)

dezoito
2 weeks, 4 days ago
D according to https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
D should be correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/88807-exam-aws-certified-developer-associate-topic-1-question-264/
upvoted 1 times
...
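Option D's access logging records one entry per request using `$context` variables; the status and error fields are what reveal why clients receive HTTP 400s. A sketch of a typical JSON-style access log format string for the stage (the particular field selection is an example, not the only valid one):

```python
# Example access log format for an API Gateway stage. Each $context variable
# is substituted per request; $context.error.* surfaces the reason behind
# 4xx responses such as request validation failures.
import json

ACCESS_LOG_FORMAT = json.dumps({
    "requestId": "$context.requestId",
    "httpMethod": "$context.httpMethod",
    "resourcePath": "$context.resourcePath",
    "status": "$context.status",
    "errorMessage": "$context.error.message",
    "validationError": "$context.error.validationErrorString",
})
```

This string is what you would supply as the access log format when pointing the stage at the CloudWatch Logs log group ARN.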
Question #182 Topic 1

A company developed an API application on AWS by using Amazon CloudFront, Amazon API Gateway, and AWS Lambda. The API has a minimum of four requests every second. A developer notices that many API users run the same query by using the POST method. The developer wants to cache the POST request to optimize the API resources.

Which solution will meet these requirements?

  • A. Configure the CloudFront cache. Update the application to return cached content based upon the default request headers.
  • B. Override the cache method in the selected stage of API Gateway. Select the POST method.
  • C. Save the latest request response in Lambda /tmp directory. Update the Lambda function to check the /tmp directory.
  • D. Save the latest request in AWS Systems Manager Parameter Store. Modify the Lambda function to take the latest request response from Parameter Store.

Correct Answer: B 🗳️

Community vote distribution
B (89%)
11%

Jing2023
3 weeks, 3 days ago
Selected Answer: B
Why A is not correct Amazon CloudFront does not cache the responses to POST, PUT, DELETE, and PATCH requests – these requests are proxied back to the origin server. You may enable caching for the responses to OPTIONS requests.
upvoted 3 times
...
kr5031
3 weeks, 4 days ago
Selected Answer: B
A is incorrect because CloudFront caches responses only to GET and HEAD requests. You can also configure CloudFront to cache responses to OPTIONS requests, but it does not cache responses to requests that use the other methods. (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html)
upvoted 3 times
dilleman
3 weeks, 3 days ago
I agree, I think B is correct as well looking into it more.
upvoted 2 times
...
...
dilleman
3 weeks, 5 days ago
Selected Answer: A
A is the correct answer here. CloudFront can be configured to cache based on request headers, query strings, and POST request bodies. Option B might work but it does not work by default and it's not an effective way to solve this.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) is the best option because it uses the Amazon API Gateway cache to cache POST requests.
upvoted 2 times
...
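Option B amounts to overriding the stage's method settings so the POST method is cached. A sketch of the patch operation (the `/orders` resource path is hypothetical; `/` inside the resource path is escaped as `~1` in the patch path):

```python
# Sketch of option B: enable caching for one method by patching the stage's
# method settings. The resource path is a placeholder.
def enable_post_cache_patch(resource_path: str) -> list:
    escaped = resource_path.replace("/", "~1")
    return [{
        "op": "replace",
        "path": f"/{escaped}/POST/caching/enabled",
        "value": "true",
    }]

# import boto3
# boto3.client("apigateway").update_stage(
#     restApiId="abc123", stageName="prod",
#     patchOperations=enable_post_cache_patch("/orders"))
```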
Question #183 Topic 1

A company is building a microservices application that consists of many AWS Lambda functions. The development team wants to use AWS Serverless Application Model (AWS SAM) templates to automatically test the Lambda functions. The development team plans to test a small percentage of traffic that is directed to new updates before the team commits to a full deployment of the application.

Which combination of steps will meet these requirements in the MOST operationally efficient way? (Choose two.)

  • A. Use AWS SAM CLI commands in AWS CodeDeploy to invoke the Lambda functions to test the deployment.
  • B. Declare the EventInvokeConfig on the Lambda functions in the AWS SAM templates with OnSuccess and OnFailure configurations.
  • C. Enable gradual deployments through AWS SAM templates.
  • D. Set the deployment preference type to Canary10Percent30Minutes. Use hooks to test the deployment.
  • E. Set the deployment preference type to Linear10PercentEvery10Minutes. Use hooks to test the deployment.

Correct Answer: BD 🗳️

Community vote distribution
CD (80%)
CE (20%)

PrakashM14
5 days, 21 hours ago
Selected Answer: CD
C. Enable gradual deployments through AWS SAM templates. D. Set the deployment preference type to Canary10Percent30Minutes. Use hooks to test the deployment.
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: CD
C and D should be correct. Given that "The development team plans to test a small percentage of traffic that is directed to new updates before the team commits to a full deployment of the application." then Option D makes more sense than Option E.
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: CE
The correct answers are (C) and (E). (C) is the most important step because it allows you to deploy new Lambda function updates to a small percentage of your traffic. (E) is the second most important step because it allows you to test new Lambda function updates using hooks.
upvoted 1 times
...
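Combining C and D in a SAM template looks roughly like the fragment below; the function and hook names are placeholders:

```yaml
# Sketch: gradual deployment (C) with a canary preference and test hooks (D).
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent30Minutes   # 10% of traffic for 30 min, then 100%
      Hooks:
        PreTraffic: !Ref PreTrafficHookFunction
        PostTraffic: !Ref PostTrafficHookFunction
```

The pre- and post-traffic hooks are themselves Lambda functions that run the tests and signal success or failure to CodeDeploy before the shift completes.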
Question #184 Topic 1

A company is using AWS CloudFormation to deploy a two-tier application. The application will use Amazon RDS as its backend database. The company wants a solution that will randomly generate the database password during deployment. The solution also must automatically rotate the database password without requiring changes to the application.

What is the MOST operationally efficient solution that meets these requirements?

  • A. Use an AWS Lambda function as a CloudFormation custom resource to generate and rotate the password.
  • B. Use an AWS Systems Manager Parameter Store resource with the SecureString data type to generate and rotate the password.
  • C. Use a cron daemon on the application’s host to generate and rotate the password.
  • D. Use an AWS Secrets Manager resource to generate and rotate the password.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/88814-exam-aws-certified-developer-associate-topic-1-question-270/
upvoted 2 times
...
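A sketch of option D in CloudFormation: the secret generates a random password at deploy time, the DB instance resolves it dynamically, and a rotation schedule rotates it with no application changes. Resource names and the rotation interval are placeholders (a real template typically also needs a SecretTargetAttachment):

```yaml
DBSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    GenerateSecretString:
      SecretStringTemplate: '{"username": "admin"}'
      GenerateStringKey: password
      PasswordLength: 32
      ExcludeCharacters: '"@/\'
Database:
  Type: AWS::RDS::DBInstance
  Properties:
    MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
    MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
SecretRotation:
  Type: AWS::SecretsManager::RotationSchedule
  Properties:
    SecretId: !Ref DBSecret
    HostedRotationLambda:
      RotationType: MySQLSingleUser
    RotationRules:
      AutomaticallyAfterDays: 30
```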
Question #185 Topic 1

A developer has been asked to create an AWS Lambda function that is invoked any time updates are made to items in an Amazon DynamoDB table. The function has been created, and appropriate permissions have been added to the Lambda execution role. Amazon DynamoDB streams have been enabled for the table, but the function is still not being invoked.

Which option would enable DynamoDB table updates to invoke the Lambda function?

  • A. Change the StreamViewType parameter value to NEW_AND_OLD_IMAGES for the DynamoDB table.
  • B. Configure event source mapping for the Lambda function.
  • C. Map an Amazon Simple Notification Service (Amazon SNS) topic to the DynamoDB streams.
  • D. Increase the maximum runtime (timeout) setting of the Lambda function.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B is the only option that makes sense here
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/4365-exam-aws-certified-developer-associate-topic-1-question-35/#
upvoted 1 times
...
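Option B is the missing wiring: a stream on the table alone does nothing until an event source mapping polls it and invokes the function with batches of records. A sketch of the call, with placeholder names and the real boto3 call commented out:

```python
# Sketch of option B: an event source mapping connects the DynamoDB stream
# to the Lambda function. ARNs and sizes are placeholders.
def stream_mapping_request(function_name: str, stream_arn: str) -> dict:
    return {
        "FunctionName": function_name,
        "EventSourceArn": stream_arn,
        "StartingPosition": "LATEST",  # begin with new table updates
        "BatchSize": 100,
        "Enabled": True,
    }

# import boto3
# boto3.client("lambda").create_event_source_mapping(
#     **stream_mapping_request(
#         "process-updates",
#         "arn:aws:dynamodb:us-east-1:123456789012:table/Items/stream/..."))
```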
Question #186 Topic 1

A developer needs to deploy an application running on AWS Fargate using Amazon ECS. The application has environment variables that must be passed to a container for the application to initialize.

How should the environment variables be passed to the container?

  • A. Define an array that includes the environment variables under the environment parameter within the service definition.
  • B. Define an array that includes the environment variables under the environment parameter within the task definition.
  • C. Define an array that includes the environment variables under the entryPoint parameter within the task definition.
  • D. Define an array that includes the environment variables under the entryPoint parameter within the service definition.

Correct Answer: A 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/28795-exam-aws-certified-developer-associate-topic-1-question-108/
upvoted 2 times
...
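Option B sketched as the relevant slice of a Fargate task definition: environment variables live under `environment` inside `containerDefinitions` in the task definition, not in the service definition. Names and values are placeholders:

```python
# Fragment of an ECS task definition's containerDefinitions entry, shown as
# the Python dict you would register via register_task_definition. Values
# are examples only.
container_definition = {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "environment": [
        {"name": "APP_ENV", "value": "production"},
        {"name": "LOG_LEVEL", "value": "info"},
    ],
}
```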
Question #187 Topic 1

A development team maintains a web application by using a single AWS CloudFormation template. The template defines web servers and an Amazon RDS database. The team uses the CloudFormation template to deploy the CloudFormation stack to different environments.

During a recent application deployment, a developer caused the primary development database to be dropped and recreated. The result of this incident was a loss of data. The team needs to avoid accidental database deletion in the future.

Which solutions will meet these requirements? (Choose two.)

  • A. Add a CloudFormation DeletionPolicy attribute with the Retain value to the database resource.
  • B. Update the CloudFormation stack policy to prevent updates to the database.
  • C. Modify the database to use a Multi-AZ deployment.
  • D. Create a CloudFormation stack set for the web application and database deployments.
  • E. Add a CloudFormation DeletionPolicy attribute with the Retain value to the stack.

Correct Answer: AB 🗳️

Community vote distribution
AB (100%)

Gold07
2 weeks, 6 days ago
The answer is A and D
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: AB
A and B) https://www.examtopics.com/discussions/amazon/view/103521-exam-aws-certified-developer-associate-dva-c02-topic-1/#
upvoted 2 times
...
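Option A is a one-attribute change on the database resource. A sketch (resource name and properties are examples); `UpdateReplacePolicy` is shown as well because the incident described was a drop-and-recreate during an update, not a stack deletion:

```yaml
DevDatabase:
  Type: AWS::RDS::DBInstance
  DeletionPolicy: Retain        # keep the DB if the resource is deleted
  UpdateReplacePolicy: Retain   # keep the old DB if an update replaces it
  Properties:
    Engine: mysql
    # ...
```

Option B complements this with a stack policy that denies `Update:Replace` and `Update:Delete` actions on the database's logical ID.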
Question #188 Topic 1

A developer is storing sensitive data generated by an application in Amazon S3. The developer wants to encrypt the data at rest. A company policy requires an audit trail of when the AWS Key Management Service (AWS KMS) key was used and by whom.

Which encryption option will meet these requirements?

  • A. Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • B. Server-side encryption with AWS KMS managed keys (SSE-KMS)
  • C. Server-side encryption with customer-provided keys (SSE-C)
  • D. Server-side encryption with self-managed keys

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B, since we need an audit trail of the AWS KMS key then this is the one to use.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/28801-exam-aws-certified-developer-associate-topic-1-question-217/
upvoted 1 times
...
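Option B as a PutObject request: SSE-KMS encrypts the object at rest, and because the key lives in AWS KMS, every use of it is recorded in AWS CloudTrail, which supplies the who-and-when audit trail the policy requires. Bucket name and key alias below are placeholders:

```python
# Sketch of an SSE-KMS upload. Omitting SSEKMSKeyId would fall back to the
# AWS managed aws/s3 key; a customer managed key (shown as a placeholder
# alias) still satisfies the audit requirement via CloudTrail.
def sse_kms_put_request(bucket: str, key: str, body: bytes) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": "alias/app-data",
    }

# import boto3
# boto3.client("s3").put_object(
#     **sse_kms_put_request("sensitive-bucket", "reports/q1.csv", b"..."))
```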
Question #189 Topic 1

A company has an ecommerce application. To track product reviews, the company’s development team uses an Amazon DynamoDB table.

Every record includes the following:

• A Review ID, a 16-digit universally unique identifier (UUID)
• A Product ID and User ID, 16-digit UUIDs that reference other tables
• A Product Rating on a scale of 1-5
• An optional comment from the user

The table partition key is the Review ID. The most performed query against the table is to find the 10 reviews with the highest rating for a given product.

Which index will provide the FASTEST response for this query?

  • A. A global secondary index (GSI) with Product ID as the partition key and Product Rating as the sort key
  • B. A global secondary index (GSI) with Product ID as the partition key and Review ID as the sort key
  • C. A local secondary index (LSI) with Product ID as the partition key and Product Rating as the sort key
  • D. A local secondary index (LSI) with Review ID as the partition key and Product ID as the sort key

Correct Answer: B 🗳️

Community vote distribution
A (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: A
A should be correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
A) https://www.examtopics.com/discussions/amazon/view/88995-exam-aws-certified-developer-associate-topic-1-question-362/
upvoted 1 times
...
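With option A's GSI (Product ID as partition key, Product Rating as sort key), the ten highest-rated reviews come back from a single Query reading the sort key in descending order. A sketch with placeholder table and index names:

```python
# Sketch of the "top 10 reviews for a product" query against the GSI from
# option A. Table, index, and attribute names are placeholders.
def top_reviews_query(product_id: str) -> dict:
    return {
        "TableName": "Reviews",
        "IndexName": "ProductRatingIndex",
        "KeyConditionExpression": "ProductId = :pid",
        "ExpressionAttributeValues": {":pid": {"S": product_id}},
        "ScanIndexForward": False,  # descending by Product Rating
        "Limit": 10,
    }
```

An LSI (options C and D) is ruled out structurally: an LSI must share the table's partition key, which here is Review ID.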
Question #190 Topic 1

A company needs to distribute firmware updates to its customers around the world.

Which service will allow easy and secure control of the access to the downloads at the lowest cost?

  • A. Use Amazon CloudFront with signed URLs for Amazon S3.
  • B. Create a dedicated Amazon CloudFront Distribution for each customer.
  • C. Use Amazon CloudFront with AWS Lambda@Edge.
  • D. Use Amazon API Gateway and AWS Lambda to control access to an S3 bucket.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: A
A is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
A) https://www.examtopics.com/discussions/amazon/view/8792-exam-aws-certified-developer-associate-topic-1-question-179/#
upvoted 1 times
...
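The signed part of option A's signed URL is a policy document like the one built below, which limits how long the firmware object can be downloaded. Only the policy JSON (plain stdlib) is sketched here; actually signing the URL additionally requires the CloudFront key pair:

```python
# Build a CloudFront custom policy limiting a download URL's lifetime.
# The distribution domain and object path are placeholders.
import json, time

def download_policy(url: str, valid_seconds: int) -> str:
    expires = int(time.time()) + valid_seconds
    return json.dumps({
        "Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }]
    })
```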
Question #191 Topic 1

A developer is testing an application that invokes an AWS Lambda function asynchronously. During the testing phase, the Lambda function fails to process after two retries.

How can the developer troubleshoot the failure?

  • A. Configure AWS CloudTrail logging to investigate the invocation failures.
  • B. Configure Dead Letter Queues by sending events to Amazon SQS for investigation.
  • C. Configure Amazon Simple Workflow Service to process any direct unprocessed events.
  • D. Configure AWS Config to process any direct unprocessed events.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
Dead Letter Queues (DLQ) can be configured for Lambda functions to capture failed asynchronous invocations. Events that cannot be processed will be sent to an SQS queue (or an SNS topic) you specify, allowing for further investigation and reprocessing.
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/28638-exam-aws-certified-developer-associate-topic-1-question-317/#
upvoted 2 times
...
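Option B is a one-setting change on the function: attach an SQS queue as the dead-letter queue so events that still fail after the automatic retries are preserved for inspection instead of being dropped. A sketch with placeholder ARNs:

```python
# Sketch of option B: configure a DLQ for asynchronous invocation failures.
def dlq_update_request(function_name: str, queue_arn: str) -> dict:
    return {
        "FunctionName": function_name,
        "DeadLetterConfig": {"TargetArn": queue_arn},
    }

# import boto3
# boto3.client("lambda").update_function_configuration(
#     **dlq_update_request(
#         "my-func", "arn:aws:sqs:us-east-1:123456789012:failed-events"))
```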
Question #192 Topic 1

A company is migrating its PostgreSQL database into the AWS Cloud. The company wants to use a database that will secure and regularly rotate database credentials. The company wants a solution that does not require additional programming overhead.

Which solution will meet these requirements?

  • A. Use Amazon Aurora PostgreSQL for the database. Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation.
  • B. Use Amazon Aurora PostgreSQL for the database. Store the database credentials in AWS Secrets Manager. Turn on rotation.
  • C. Use Amazon DynamoDB for the database. Store the database credentials in AWS Systems Manager Parameter Store. Turn on rotation.
  • D. Use Amazon DynamoDB for the database. Store the database credentials in AWS Secrets Manager. Turn on rotation.

Correct Answer: C 🗳️

Community vote distribution
B (100%)

Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) meets all the requirements:
  • Secures and regularly rotates database credentials: AWS Secrets Manager provides built-in credential rotation, which changes the database credentials at regular intervals.
  • No additional programming overhead: Secrets Manager credential rotation is fully automated, so it requires no extra code.
upvoted 3 times
...
Question #193 Topic 1

A developer is creating a mobile application that will not require users to log in.

What is the MOST efficient method to grant users access to AWS resources?

  • A. Use an identity provider to securely authenticate with the application.
  • B. Create an AWS Lambda function to create an IAM user when a user accesses the application.
  • C. Create credentials using AWS KMS and apply these credentials to users when using the application.
  • D. Use Amazon Cognito to associate unauthenticated users with an IAM role that has limited access to resources.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/4245-exam-aws-certified-developer-associate-topic-1-question-79/
upvoted 2 times
...
Question #194 Topic 1

A company has developed a new serverless application using AWS Lambda functions that will be deployed using the AWS Serverless Application Model (AWS SAM) CLI.

Which step should the developer complete prior to deploying the application?

  • A. Compress the application to a .zip file and upload it into AWS Lambda.
  • B. Test the new AWS Lambda function by first tracing it in AWS X-Ray.
  • C. Bundle the serverless application using a SAM package.
  • D. Create the application environment using the eb create my-env command.

Correct Answer: B 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: C
C) https://www.examtopics.com/discussions/amazon/view/28650-exam-aws-certified-developer-associate-topic-1-question-312/
upvoted 2 times
...
Question #195 Topic 1

A company wants to automate part of its deployment process. A developer needs to automate the process of checking for and deleting unused resources that supported previously deployed stacks but that are no longer used.

The company has a central application that uses the AWS Cloud Development Kit (AWS CDK) to manage all deployment stacks. The stacks are spread out across multiple accounts. The developer’s solution must integrate as seamlessly as possible within the current deployment process.

Which solution will meet these requirements with the LEAST amount of configuration?

  • A. In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CloudFormation template from a JSON file. Use the template to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
  • B. In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
  • C. In the central AWS CDK, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an API in AWS Amplify. Use the API to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
  • D. In the AWS Lambda console, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to import the Lambda function into the stack and to invoke the Lambda function when the deployment stack runs.

Correct Answer: B 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B. In the central AWS CDK application, write a handler function in the code that uses AWS SDK calls to check for and delete unused resources. Create an AWS CDK custom resource. Use the custom resource to attach the function code to an AWS Lambda function and to invoke the Lambda function when the deployment stack runs.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). Solution (B) is the best option because:
  • It requires the LEAST amount of configuration: it uses an AWS CDK custom resource, a type of resource that can be defined in AWS CDK code. Custom resources are a convenient way to add custom functionality to your AWS CloudFormation stacks.
  • It integrates seamlessly into the current deployment process: the custom resource attaches the function code to an AWS Lambda function and invokes the Lambda function when the deployment stack runs, so no changes to the existing AWS CDK code are required.
upvoted 2 times
...
Question #196 Topic 1

A company built a new application in the AWS Cloud. The company automated the bootstrapping of new resources with an Auto Scaling group by using AWS CloudFormation templates. The bootstrap scripts contain sensitive data.

The company needs a solution that is integrated with CloudFormation to manage the sensitive data in the bootstrap scripts.

Which solution will meet these requirements in the MOST secure way?

  • A. Put the sensitive data into a CloudFormation parameter. Encrypt the CloudFormation templates by using an AWS Key Management Service (AWS KMS) key.
  • B. Put the sensitive data into an Amazon S3 bucket. Update the CloudFormation templates to download the object from Amazon S3 during bootstrap.
  • C. Put the sensitive data into AWS Systems Manager Parameter Store as a secure string parameter. Update the CloudFormation templates to use dynamic references to specify template values.
  • D. Put the sensitive data into Amazon Elastic File System (Amazon EFS). Enforce EFS encryption after file system creation. Update the CloudFormation templates to retrieve data from Amazon EFS.

Correct Answer: D 🗳️

Community vote distribution
C (80%)
A (20%)

kashtelyan
2 weeks, 5 days ago
Selected Answer: A
A option leverages CloudFormation parameters, which can securely store sensitive data. By using an AWS KMS key to encrypt the CloudFormation templates, you ensure that the sensitive data is protected. It follows the principle of least privilege and provides secure access to sensitive information directly within CloudFormation. Option B is less secure because it involves storing sensitive data in an S3 bucket, which could be compromised. Option C suggests using AWS Systems Manager Parameter Store, which is secure, but using CloudFormation parameters and KMS keys provides an integrated solution directly within CloudFormation. Option D involves Amazon EFS, which is typically used for file storage and is not designed for securely storing sensitive data directly within CloudFormation.
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: C
C is the correct choice. Parameter Store's secure string parameter encrypts the data using AWS KMS
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Solution (C) is the best option because:
  • It is the most secure solution: sensitive data is stored in AWS Systems Manager Parameter Store, a secrets management service managed by AWS, and secure string parameters are encrypted with an AWS KMS key.
  • It is integrated with CloudFormation: secure string parameters can be referenced in CloudFormation templates using dynamic references, so sensitive data does not need to be stored in the CloudFormation code.
upvoted 2 times
...
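Option C's dynamic reference keeps the secret value out of the template entirely; CloudFormation resolves it from Parameter Store at deploy time. A sketch (parameter name and version are placeholders; note that `ssm-secure` references are accepted only for specific resource properties, such as the one shown):

```yaml
Database:
  Type: AWS::RDS::DBInstance
  Properties:
    MasterUserPassword: '{{resolve:ssm-secure:/app/bootstrap/db-password:1}}'
    # ... other properties
```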
Question #197 Topic 1

A company needs to set up secure database credentials for all its AWS Cloud resources. The company’s resources include Amazon RDS DB instances, Amazon DocumentDB clusters, and Amazon Aurora DB instances. The company’s security policy mandates that database credentials be encrypted at rest and rotated at a regular interval.

Which solution will meet these requirements MOST securely?

  • A. Set up IAM database authentication for token-based access. Generate user tokens to provide centralized access to RDS DB instances, Amazon DocumentDB clusters, and Aurora DB instances.
  • B. Create parameters for the database credentials in AWS Systems Manager Parameter Store. Set the Type parameter to SecureString. Set up automatic rotation on the parameters.
  • C. Store the database access credentials as an encrypted Amazon S3 object in an S3 bucket. Block all public access on the S3 bucket. Use S3 server-side encryption to set up automatic rotation on the encryption key.
  • D. Create an AWS Lambda function by using the SecretsManagerRotationTemplate template in the AWS Secrets Manager console. Create secrets for the database credentials in Secrets Manager. Set up secrets rotation on a schedule.

Correct Answer: C 🗳️

Community vote distribution
D (100%)

nickolaj
2 weeks, 3 days ago
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
the best and most secure option is: D. Create an AWS Lambda function by using the SecretsManagerRotationTemplate template in the AWS Secrets Manager console.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). AWS Secrets Manager is an AWS-managed secrets management service that provides encryption at rest and automatic secret rotation. The solution meets the company's security requirements: database credentials are encrypted at rest by using AWS Key Management Service (AWS KMS), and the credentials are rotated automatically at regular intervals.
upvoted 1 times
...
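The rotation setup behind option D can be sketched as the parameters passed to the Secrets Manager RotateSecret call. This is a minimal illustration, assuming boto3; the secret name and rotation Lambda ARN below are placeholders, not values from the question.

```python
# Sketch of the Secrets Manager rotation call from option D. The secret ID
# and Lambda ARN are hypothetical placeholders.
def build_rotation_request(secret_id, rotation_lambda_arn, days=30):
    """Build the kwargs for secretsmanager.rotate_secret()."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

request = build_rotation_request(
    "prod/db-credentials",
    "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
)
# A real call would be: boto3.client("secretsmanager").rotate_secret(**request)
```

The rotation Lambda itself can be generated from the SecretsManagerRotationTemplate, which is what the option describes.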
fordiscussionstwo
1 month ago
DDDDDDD
upvoted 2 times
...
Question #198 Topic 1

A developer has created an AWS Lambda function that makes queries to an Amazon Aurora MySQL DB instance. When the developer performs a test, the DB instance shows an error for too many connections.

Which solution will meet these requirements with the LEAST operational effort?

  • A. Create a read replica for the DB instance. Query the replica DB instance instead of the primary DB instance.
  • B. Migrate the data to an Amazon DynamoDB database.
  • C. Configure the Amazon Aurora MySQL DB instance for Multi-AZ deployment.
  • D. Create a proxy in Amazon RDS Proxy. Query the proxy instead of the DB instance.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
D. RDS Proxy sits between the application and the database to manage and pool connections, reducing the chance of exhausting database connections when many Lambda functions try to connect simultaneously.
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
D) https://www.examtopics.com/discussions/amazon/view/88969-exam-aws-certified-developer-associate-topic-1-question-358/
upvoted 1 times
...
fordiscussionstwo
1 month ago
DDDDDDDDDDD
upvoted 3 times
...
Question #199 Topic 1

A developer is creating a new REST API by using Amazon API Gateway and AWS Lambda. The development team tests the API and validates responses for the known use cases before deploying the API to the production environment.

The developer wants to make the REST API available for testing by using API Gateway locally.

Which AWS Serverless Application Model Command Line Interface (AWS SAM CLI) subcommand will meet these requirements?

  • A. Sam local invoke
  • B. Sam local generate-event
  • C. Sam local start-lambda
  • D. Sam local start-api

Correct Answer: D 🗳️

Community vote distribution
D (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). The AWS SAM CLI sam local start-api subcommand starts a local instance of API Gateway, which lets you test the REST API locally before deploying it to the production environment. The other subcommands do not meet the requirement: sam local invoke invokes a Lambda function locally, sam local generate-event generates a sample event payload for local invocation, and sam local start-lambda starts a local endpoint that emulates the AWS Lambda service.
upvoted 2 times
...
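The subcommand from option D can be expressed as an argv list, which is how a wrapper script might launch it. This is only a sketch; running the command requires the AWS SAM CLI to be installed, so the actual invocation is left commented out.

```python
# The SAM CLI subcommand from option D as an argv list; the port is an
# illustrative choice.
cmd = ["sam", "local", "start-api", "--port", "3000"]
# subprocess.run(cmd, check=True) would serve the API locally so it can be
# tested at http://127.0.0.1:3000 before deployment.
```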
fordiscussionstwo
1 month ago
DDDDDDDDDDD
upvoted 3 times
...
Question #200 Topic 1

A company has a serverless application on AWS that uses a fleet of AWS Lambda functions that have aliases. The company regularly publishes new Lambda function versions by using an in-house deployment solution. The company wants to improve the release process and to use traffic shifting. A newly published function version should initially be made available only to a fixed percentage of production users.

Which solution will meet these requirements?

  • A. Configure routing on the alias of the new function by using a weighted alias.
  • B. Configure a canary deployment type for Lambda.
  • C. Configure routing on the new versions by using environment variables.
  • D. Configure a linear deployment type for Lambda.

Correct Answer: B 🗳️

Community vote distribution
A (100%)

Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Weighted aliases allow you to route traffic to different versions of a function based on weights that you assign. This allows you to implement a canary deployment, where you initially route a small percentage of your traffic to the new version of the function, and then gradually increase the percentage as you gain confidence in the new version.
upvoted 1 times
...
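The weighted-alias routing from option A can be sketched as the parameters for Lambda's UpdateAlias API. The function and alias names are hypothetical; here version "2" receives 10% of traffic and the alias's primary version receives the rest.

```python
# Weighted-alias routing (option A) as boto3 kwargs; names are placeholders.
def build_weighted_alias_request(function_name, alias, new_version, weight):
    return {
        "FunctionName": function_name,
        "Name": alias,
        # Route `weight` of the traffic to the new version; the remainder
        # stays on the alias's primary version.
        "RoutingConfig": {"AdditionalVersionWeights": {new_version: weight}},
    }

request = build_weighted_alias_request("order-processor", "live", "2", 0.10)
# A real call: boto3.client("lambda").update_alias(**request)
```

To complete the shift, the weight can be increased over time and the alias's primary version eventually updated to the new version.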
fordiscussionstwo
1 month ago
AAAAAAAAAAA
upvoted 2 times
...
Question #201 Topic 1

A company has an application that stores data in Amazon RDS instances. The application periodically experiences surges of high traffic that cause performance problems. During periods of peak traffic, a developer notices a reduction in query speed in all database queries.

The team’s technical lead determines that a multi-threaded and scalable caching solution should be used to offload the heavy read traffic. The solution needs to improve performance.

Which solution will meet these requirements with the LEAST complexity?

  • A. Use Amazon ElastiCache for Memcached to offload read requests from the main database.
  • B. Replicate the data to Amazon DynamoDB. Set up a DynamoDB Accelerator (DAX) cluster.
  • C. Configure the Amazon RDS instances to use Multi-AZ deployment with one standby instance. Offload read requests from the main database to the standby instance.
  • D. Use Amazon ElastiCache for Redis to offload read requests from the main database.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

kashtelyan
3 weeks, 6 days ago
Selected Answer: A
When deciding between Memcached and Redis, here are a few questions to consider: Is object caching your primary goal, for example to offload your database? If so, use Memcached. https://docs.aws.amazon.com/whitepapers/latest/scale-performance-elasticache/memcached-vs.-redis.html
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Amazon ElastiCache for Memcached is a scalable, multithreaded caching solution that can be used to offload heavy read traffic from Amazon RDS instances. ElastiCache for Memcached is easy to configure and manage, making it a low-effort solution to meet technical lead requirements.
upvoted 2 times
...
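The read-offloading in option A follows the cache-aside pattern: check the cache first, and only query the database on a miss. A minimal sketch, with a plain dict standing in for an ElastiCache for Memcached client (real code would use a memcached client library pointed at the cluster endpoint), and query_db as a hypothetical database helper:

```python
# Cache-aside pattern behind option A; `cache` stands in for a Memcached
# client and `query_db` for a real RDS query.
cache = {}

def get_order(order_id, query_db):
    if order_id in cache:          # cache hit: the database is not touched
        return cache[order_id]
    value = query_db(order_id)     # cache miss: read from the database
    cache[order_id] = value        # populate the cache for later reads
    return value

db_calls = []
def fake_db(order_id):
    db_calls.append(order_id)
    return {"id": order_id, "qty": 2}

get_order("A1", fake_db)
get_order("A1", fake_db)   # second read is served from the cache
```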
fordiscussionstwo
1 month ago
AAAAAAAAA
upvoted 2 times
...
Question #202 Topic 1

A developer must provide an API key to an AWS Lambda function to authenticate with a third-party system. The Lambda function will run on a schedule. The developer needs to ensure that the API key remains encrypted at rest.

Which solution will meet these requirements?

  • A. Store the API key as a Lambda environment variable by using an AWS Key Management Service (AWS KMS) customer managed key.
  • B. Configure the application to prompt the user to provide the password to the Lambda function on the first run.
  • C. Store the API key as a value in the application code.
  • D. Use Lambda@Edge and only communicate over the HTTPS protocol.

Correct Answer: C 🗳️

Community vote distribution
A (100%)

Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Storing the API key as a Lambda environment variable using an AWS Key Management Service (AWS KMS) customer-managed key is the most secure solution. AWS KMS is a managed encryption service that provides customer-managed keys. Customer-managed keys are encrypted with an AWS KMS master key, which is stored in an AWS KMS vault.
upvoted 3 times
...
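At runtime, option A is simple for the function code: Lambda decrypts KMS-encrypted environment variables before the handler runs, so the handler just reads the plaintext value. A minimal sketch; the variable name THIRD_PARTY_API_KEY is a placeholder, and the last two lines simulate what Lambda's configuration would provide.

```python
import os

# Option A at runtime: the function reads the API key from an environment
# variable that Lambda has already decrypted with the KMS key.
def get_api_key():
    key = os.environ.get("THIRD_PARTY_API_KEY")
    if not key:
        raise RuntimeError("THIRD_PARTY_API_KEY is not configured")
    return key

os.environ["THIRD_PARTY_API_KEY"] = "example-token"  # simulate Lambda config
api_key = get_api_key()
```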
fordiscussionstwo
1 month ago
AAAAAAAAAA
upvoted 1 times
...
Question #203 Topic 1

An IT department uses Amazon S3 to store sensitive images. After more than 1 year, the company moves the images into archival storage. The company rarely accesses the images, but the company wants a storage solution that maximizes resiliency. The IT department needs access to the images that have been moved to archival storage within 24 hours.

Which solution will meet these requirements MOST cost-effectively?

  • A. Use S3 Standard-Infrequent Access (S3 Standard-IA) to store the images. Use S3 Glacier Deep Archive with standard retrieval to store and retrieve archived images.
  • B. Use S3 Standard-Infrequent Access (S3 Standard-IA) to store the images. Use S3 Glacier Deep Archive with bulk retrieval to store and retrieve archived images.
  • C. Use S3 Intelligent-Tiering to store the images. Use S3 Glacier Deep Archive with standard retrieval to store and retrieve archived images.
  • D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the images. Use S3 Glacier Deep Archive with bulk retrieval to store and retrieve archived images.

Correct Answer: D 🗳️

Community vote distribution
A (80%)
C (20%)

hcsaba1982
1 week, 5 days ago
Selected Answer: C
A : Glacier Deep Archive is cheaper than Standard-IA. C : Standard archival is 12h. B : bulk retrieval is 48h D : S3 One Zone-IA - cross-out due to "maximizes resiliency"
upvoted 1 times
ut18
5 days, 18 hours ago
Check the requirement : The IT department needs access to the images that have been moved to archival storage within 24 hours.
upvoted 1 times
...
...
Learning4life
3 weeks, 1 day ago
A is correct. The requirement of maximizing resiliency rules out One Zone. Standard recover is within 12 hours, which fits the requirement of within 24 hours. https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects-retrieval-options.html
upvoted 2 times
...
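The retrieval-tier distinction the commenters rely on shows up directly in the S3 RestoreObject parameters. A sketch of the request for option A's standard retrieval (Deep Archive standard retrieval completes within 12 hours, satisfying the 24-hour requirement, while Bulk can take up to 48 hours); the bucket and key are placeholders.

```python
# Restore request for an archived image (option A); bucket/key are
# hypothetical.
def build_restore_request(bucket, key, tier="Standard", days=1):
    return {
        "Bucket": bucket,
        "Key": key,
        "RestoreRequest": {
            "Days": days,                               # keep the copy 1 day
            "GlacierJobParameters": {"Tier": tier},     # Standard, not Bulk
        },
    }

request = build_restore_request("archive-bucket", "images/scan-001.png")
# A real call: boto3.client("s3").restore_object(**request)
```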
Cerakoted
3 weeks, 5 days ago
Selected Answer: A
It is A
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
(A) is correct because standard retrieval from S3 Glacier Deep Archive completes within 12 hours, and the requirement says the images must be retrievable within 24 hours. Bulk retrieval takes up to 48 hours.
upvoted 3 times
...
fordiscussionstwo
1 month ago
BBBBBBBBBB
upvoted 1 times
...
Question #204 Topic 1

A developer is building a serverless application by using the AWS Serverless Application Model (AWS SAM). The developer is currently testing the application in a development environment. When the application is nearly finished, the developer will need to set up additional testing and staging environments for a quality assurance team.

The developer wants to use a feature of the AWS SAM to set up deployments to multiple environments.

Which solution will meet these requirements with the LEAST development effort?

  • A. Add a configuration file in TOML format to group configuration entries to every environment. Add a table for each testing and staging environment. Deploy updates to the environments by using the sam deploy command and the --config-env flag that corresponds to each environment.
  • B. Create additional AWS SAM templates for each testing and staging environment. Write a custom shell script that uses the sam deploy command and the --template-file flag to deploy updates to the environments.
  • C. Create one AWS SAM configuration file that has default parameters. Perform updates to the testing and staging environments by using the --parameter-overrides flag in the AWS SAM CLI and the parameters that the updates will override.
  • D. Use the existing AWS SAM template. Add additional parameters to configure specific attributes for the serverless function and database table resources that are in each environment. Deploy updates to the testing and staging environments by using the sam deploy command.

Correct Answer: B 🗳️

Community vote distribution
A (50%)
C (25%)
D (25%)

NinjaCloud
5 days, 13 hours ago
Correct Answer: C, You can create a single AWS SAM configuration file with default parameters and then use the --parameter-overrides flag with the AWS SAM CLI to specify parameters that override the defaults for each testing and staging environment. This approach keeps the AWS SAM template file (the infrastructure-as-code) consistent and minimizes duplication. It's a clean and simple way to manage multiple environments without having to create separate templates or custom scripts.
upvoted 1 times
...
Rameez1
2 weeks, 3 days ago
Selected Answer: C
Here all the options can do the Job but option C does it with least effort.
upvoted 1 times
...
PrakashM14
2 weeks, 3 days ago
Selected Answer: C
Options A and B introduce additional complexities such as configuration files in TOML format or writing custom shell scripts. These might require more effort and maintenance. Option D involves adding additional parameters to the existing AWS SAM template, which can work but may lead to a more complex and less maintainable template as the number of environments grows. Therefore, option C is a straightforward and efficient solution for deploying to multiple environments with AWS SAM.
upvoted 1 times
...
Jing2023
3 weeks, 2 days ago
Selected Answer: A
A should be correct reference this stackoverflow post https://stackoverflow.com/questions/68826108/how-to-deploy-to-different-environments-with-aws-sam
upvoted 2 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: A
A is correct
upvoted 2 times
...
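The TOML file from option A is the SAM CLI's samconfig.toml, with one table per environment selected via --config-env. A minimal sketch; the stack names are placeholders.

```toml
# samconfig.toml - one table per environment; deploy with, for example:
#   sam deploy --config-env staging
version = 0.1

[testing.deploy.parameters]
stack_name = "myapp-testing"
resolve_s3 = true

[staging.deploy.parameters]
stack_name = "myapp-staging"
resolve_s3 = true
```

With this layout, the same SAM template serves every environment, which is why option A needs the least development effort of the approaches that fully separate environment configuration.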
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). Using the existing AWS SAM template is the option that requires the LEAST development effort. To configure deployments across multiple environments, you can add additional parameters to your AWS SAM template to configure specific attributes for the serverless function and database table resources that are in each environment.
upvoted 2 times
...
fordiscussionstwo
1 month ago
AAAAAAAAAA
upvoted 2 times
...
Question #205 Topic 1

A developer is working on an application that processes operating data from IoT devices. Each IoT device uploads a data file once every hour to an Amazon S3 bucket. The developer wants to immediately process each data file when the data file is uploaded to Amazon S3.

The developer will use an AWS Lambda function to process the data files from Amazon S3. The Lambda function is configured with the S3 bucket information where the files are uploaded. The developer wants to configure the Lambda function to immediately invoke after each data file is uploaded.

Which solution will meet these requirements?

  • A. Add an asynchronous invocation to the Lambda function. Select the S3 bucket as the source.
  • B. Add an Amazon EventBridge event to the Lambda function. Select the S3 bucket as the source.
  • C. Add a trigger to the Lambda function. Select the S3 bucket as the source.
  • D. Add a layer to the Lambda function. Select the S3 bucket as the source.

Correct Answer: B 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C is correct
upvoted 2 times
...
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Adding a trigger to your Lambda function is the solution that will meet these requirements. A trigger is an event that can invoke a Lambda function. In the case of this issue, the trigger must be an Amazon S3 event that fires when a new file is uploaded to the bucket.
upvoted 2 times
...
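When the S3 trigger from option C fires, Lambda receives an event whose Records carry the bucket and object key of each uploaded file. A sketch of a handler pulling those fields out; the bucket and key values are illustrative.

```python
# Handler for the S3 ObjectCreated event delivered by the trigger (option C).
def handler(event, context=None):
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append((bucket, key))   # process the uploaded data file here
    return processed

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "iot-uploads"},
                "object": {"key": "device-42/2023-10-01-13.json"}}}
    ]
}
result = handler(sample_event)
```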
fordiscussionstwo
1 month ago
CCCCCCCCCCCCCC
upvoted 3 times
...
Question #206 Topic 1

A developer is setting up infrastructure by using AWS CloudFormation. If an error occurs when the resources described in the CloudFormation template are provisioned, successfully provisioned resources must be preserved. The developer must provision and update the CloudFormation stack by using the AWS CLI.

Which solution will meet these requirements?

  • A. Add an --enable-termination-protection command line option to the create-stack command and the update-stack command.
  • B. Add a --disable-rollback command line option to the create-stack command and the update-stack command.
  • C. Add a --parameters ParameterKey=PreserveResources,ParameterValue=True command line option to the create-stack command and the update-stack command.
  • D. Add a --tags Key=PreserveResources,Value=True command line option to the create-stack command and the update-stack command.

Correct Answer: C 🗳️

Community vote distribution
B (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: B
B is correct
upvoted 2 times
...
kashtelyan
3 weeks, 6 days ago
Selected Answer: B
https://www.cloudhesive.com/blog-posts/cloudformation-disable-rollback/
upvoted 3 times
...
Digo30sp
1 month ago
Selected Answer: B
The correct answer is (B). The --disable-rollback command-line option will prevent CloudFormation from rolling back the stack to the previous state if an error occurs. This will ensure that successfully provisioned resources are preserved.
upvoted 3 times
...
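The --disable-rollback flag from option B maps to the DisableRollback parameter of the CreateStack API. A sketch of the equivalent boto3 request; the stack name and template body are placeholders.

```python
# Option B as CreateStack kwargs: DisableRollback preserves resources that
# provisioned successfully when a later resource fails. Names are placeholders.
def build_create_stack_request(stack_name, template_body):
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "DisableRollback": True,
    }

request = build_create_stack_request("my-stack", "{}")
# A real call: boto3.client("cloudformation").create_stack(**request)
# CLI equivalent: aws cloudformation create-stack --disable-rollback ...
```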
fordiscussionstwo
1 month ago
BBBBBBBBBBBBBBBBB
upvoted 2 times
...
Question #207 Topic 1

A developer is building a serverless application that connects to an Amazon Aurora PostgreSQL database. The serverless application consists of hundreds of AWS Lambda functions. During every Lambda function scale out, a new database connection is made that increases database resource consumption.

The developer needs to decrease the number of connections made to the database. The solution must not impact the scalability of the Lambda functions.

Which solution will meet these requirements?

  • A. Configure provisioned concurrency for each Lambda function by setting the ProvisionedConcurrentExecutions parameter to 10.
  • B. Enable cluster cache management for Aurora PostgreSQL. Change the connection string of each Lambda function to point to cluster cache management.
  • C. Use Amazon RDS Proxy to create a connection pool to manage the database connections. Change the connection string of each Lambda function to reference the proxy.
  • D. Configure reserved concurrency for each Lambda function by setting the ReservedConcurrentExecutions parameter to 10.

Correct Answer: A 🗳️

Community vote distribution
C (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: C
C: Amazon RDS Proxy is designed to improve application scalability and resilience by pooling and reusing database connections. This can significantly reduce the number of connections each Lambda function has to establish
upvoted 3 times
...
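From the function's side, option C only changes the connection target: each Lambda connects to the RDS Proxy endpoint instead of the Aurora cluster endpoint, and the proxy pools connections behind it. A sketch of the connection configuration; the endpoint, user, and database names are hypothetical.

```python
# Option C from the Lambda side: connect to the proxy endpoint, not the DB.
def build_connection_config(proxy_endpoint, user, database):
    return {
        "host": proxy_endpoint,   # RDS Proxy endpoint, not the cluster endpoint
        "port": 5432,             # Aurora PostgreSQL default port
        "user": user,
        "dbname": database,
    }

config = build_connection_config(
    "myapp-proxy.proxy-abc123.us-east-1.rds.amazonaws.com", "app", "orders"
)
# A real handler would hand this to a PostgreSQL driver,
# e.g. psycopg2.connect(**config)
```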
Digo30sp
1 month ago
Selected Answer: C
The correct answer is (C). Amazon RDS Proxy is a solution that allows you to create a connection pool to manage database connections. This can help reduce the number of connections made to the database.
upvoted 1 times
...
fordiscussionstwo
1 month ago
CCCCCCCCCCCCCCC
upvoted 2 times
...
Question #208 Topic 1

A developer is preparing to begin development of a new version of an application. The previous version of the application is deployed in a production environment. The developer needs to deploy fixes and updates to the current version during the development of the new version of the application. The code for the new version of the application is stored in AWS CodeCommit.

Which solution will meet these requirements?

  • A. From the main branch, create a feature branch for production bug fixes. Create a second feature branch from the main branch for development of the new version.
  • B. Create a Git tag of the code that is currently deployed in production. Create a Git tag for the development of the new version. Push the two tags to the CodeCommit repository.
  • C. From the main branch, create a branch of the code that is currently deployed in production. Apply an IAM policy that ensures no other users can push or merge to the branch.
  • D. Create a new CodeCommit repository for development of the new version of the application. Create a Git tag for the development of the new version.

Correct Answer: A 🗳️

Community vote distribution
A (100%)

dilleman
3 weeks, 5 days ago
Selected Answer: A
A is a common code version control strategy
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: A
The correct answer is (A). Creating one feature branch for production bug fixes and a second feature branch for development of the new version is the solution that meets these requirements. The first feature branch can be used to fix bugs and deploy updates to the current version of the application, while the second feature branch can be used to develop the new version.
upvoted 1 times
...
fordiscussionstwo
1 month ago
AAAAAAAAAAAAAA
upvoted 2 times
...
Question #209 Topic 1

A developer is creating an AWS CloudFormation stack. The stack contains IAM resources with custom names. When the developer tries to deploy the stack, they receive an InsufficientCapabilities error.

What should the developer do to resolve this issue?

  • A. Specify the CAPABILITY_AUTO_EXPAND capability in the CloudFormation stack.
  • B. Use an administrators role to deploy IAM resources with CloudFormation.
  • C. Specify the CAPABILITY_IAM capability in the CloudFormation stack.
  • D. Specify the CAPABILITY_NAMED_IAM capability in the CloudFormation stack.

Correct Answer: B 🗳️

Community vote distribution
D (100%)

Learning4life
3 weeks, 1 day ago
D. If you have IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. See more details in this link https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html
upvoted 1 times
...
dilleman
3 weeks, 5 days ago
Selected Answer: D
D is correct
upvoted 1 times
...
Patel_ajay745
1 month ago
CCC ccccccc
upvoted 1 times
...
Digo30sp
1 month ago
Selected Answer: D
The correct answer is (D). To deploy IAM resources with custom names, you must specify the CAPABILITY_NAMED_IAM resource in the CloudFormation stack. The CAPABILITY_IAM resource allows CloudFormation to create and modify IAM resources. The CAPABILITY_NAMED_IAM resource allows CloudFormation to create IAM resources with custom names. To resolve the issue, the developer must specify the CAPABILITY_NAMED_IAM resource in the CloudFormation stack.
upvoted 3 times
...
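The capability acknowledgment from option D is passed in the Capabilities list of the CreateStack call. A sketch; the stack name and template body are placeholders.

```python
# Option D: stacks that create IAM resources with custom names must
# acknowledge CAPABILITY_NAMED_IAM. Names below are placeholders.
def build_named_iam_stack_request(stack_name, template_body):
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }

request = build_named_iam_stack_request("iam-stack", "{}")
# CLI equivalent:
#   aws cloudformation create-stack ... --capabilities CAPABILITY_NAMED_IAM
```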
fordiscussionstwo
1 month ago
DDDDDDDDDD
upvoted 2 times
...
Question #210 Topic 1

A company uses Amazon API Gateway to expose a set of APIs to customers. The APIs have caching enabled in API Gateway. Customers need a way to invalidate the cache for each API when they test the API.

What should a developer do to give customers the ability to invalidate the API cache?

  • A. Ask the customers to use AWS credentials to call the InvalidateCache API operation.
  • B. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to send a request that contains the Cache-Control:max-age=0 HTTP header when they make an API call.
  • C. Ask the customers to use the AWS SDK API Gateway class to invoke the InvalidateCache API operation.
  • D. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to add the INVALIDATE_CACHE query string parameter when they make an API call.

Correct Answer: D 🗳️

Community vote distribution
B (100%)

dezoito
2 weeks, 4 days ago
Seems to be B but policies/roles have nothing to do with cache
upvoted 1 times
...
Patel_ajay745
1 month ago
it is DDDDDD
upvoted 1 times
fordiscussionstwo
4 weeks, 1 day ago
why? because chatGPDUMP said that? all your answers are wrong.
upvoted 2 times
...
...
Digo30sp
1 month ago
Selected Answer: B
B) https://www.examtopics.com/discussions/amazon/view/4166-exam-aws-certified-developer-associate-topic-1-question-69/
upvoted 3 times
...
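From the customer's side, option B amounts to sending the Cache-Control: max-age=0 header on the request (their IAM identity must also be allowed to invalidate the cache). A sketch of the request a client would build; the URL is a placeholder.

```python
# Option B from the caller's side: the Cache-Control header asks API Gateway
# to invalidate the cached entry for this request. URL is hypothetical.
def build_invalidation_request(url):
    return {
        "url": url,
        "headers": {"Cache-Control": "max-age=0"},
    }

request = build_invalidation_request("https://api.example.com/v1/orders")
# e.g. with the requests library:
#   requests.get(request["url"], headers=request["headers"])
```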
fordiscussionstwo
1 month ago
BBBBBBBBBBBBBB
upvoted 3 times
...
Question #211 Topic 1

A developer is creating an AWS Lambda function that will generate and export a file. The function requires 100 MB of temporary storage for temporary files while running. These files will not be needed after the function is complete.

How can the developer MOST efficiently handle the temporary files?

  • A. Store the files in Amazon Elastic Block Store (Amazon EBS) and delete the files at the end of the Lambda function.
  • B. Copy the files to Amazon Elastic File System (Amazon EFS) and delete the files at the end of the Lambda function.
  • C. Store the files in the /tmp directory and delete the files at the end of the Lambda function.
  • D. Copy the files to an Amazon S3 bucket with a lifecycle policy to delete the files.

Correct Answer: A 🗳️

Community vote distribution
C (100%)

Claire_KMT
1 week, 3 days ago
C. Store the files in the /tmp directory and delete the files at the end of the Lambda function. The /tmp directory is a dedicated temporary storage location provided by AWS Lambda for storing temporary files during the execution of the function. It's cost-effective and efficient because it doesn't involve additional AWS services or storage costs. AWS Lambda automatically manages the /tmp directory for you, including clearing its contents after the function execution is complete. You don't need to explicitly delete the files; Lambda takes care of it.
upvoted 1 times
...
LemonGremlin
1 week, 3 days ago
Selected Answer: C
Option C is the best choice for efficient handling of temporary files within an AWS Lambda function.
upvoted 1 times
...
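Option C can be sketched in a few lines: write the temporary files under /tmp (Lambda provides 512 MB of ephemeral storage there by default, configurable up to 10,240 MB, comfortably above the 100 MB requirement) and remove them before the function returns. The file path and data below are illustrative.

```python
import os

# Option C: use Lambda's /tmp ephemeral storage for scratch files and clean
# up before the function completes.
def export_report(rows, path="/tmp/report.csv"):
    with open(path, "w") as f:
        for row in rows:
            f.write(",".join(str(v) for v in row) + "\n")
    size = os.path.getsize(path)    # the generated file exists here
    os.remove(path)                 # delete the temporary file when done
    return size

size = export_report([(1, "a"), (2, "b")])
```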
Question #212 Topic 1

A company uses Amazon DynamoDB as a data store for its order management system. The company frontend application stores orders in a DynamoDB table. The DynamoDB table is configured to send change events to a DynamoDB stream. The company uses an AWS Lambda function to log and process the incoming orders based on data from the DynamoDB stream.

An operational review reveals that the order quantity of incoming orders is sometimes set to 0. A developer needs to create a dashboard that will show how many unique customers this problem affects each day.

What should the developer do to implement the dashboard?

  • A. Grant the Lambda function’s execution role permissions to upload logs to Amazon CloudWatch Logs. Implement a CloudWatch Logs Insights query that selects the number of unique customers for orders with order quantity equal to 0 and groups the results in 1-day periods. Add the CloudWatch Logs Insights query to a CloudWatch dashboard.
  • B. Use Amazon Athena to query AWS CloudTrail API logs for API calls. Implement an Athena query that selects the number of unique customers for orders with order quantity equal to 0 and groups the results in 1-day periods. Add the Athena query to an Amazon CloudWatch dashboard.
  • C. Configure the Lambda function to send events to Amazon EventBridge. Create an EventBridge rule that groups the number of unique customers for orders with order quantity equal to 0 in 1-day periods. Add a CloudWatch dashboard as the target of the rule.
  • D. Turn on custom Amazon CloudWatch metrics for the DynamoDB stream of the DynamoDB table. Create a CloudWatch alarm that groups the number of unique customers for orders with order quantity equal to 0 in 1-day periods. Add the CloudWatch alarm to a CloudWatch dashboard.

Correct Answer: D 🗳️

Community vote distribution
D (100%)

PrakashM14
6 days, 1 hour ago
Selected Answer: D
Option A suggests using CloudWatch Logs Insights, which is typically used for analyzing log data. However, in this scenario, the issue is related to metrics (order quantity), and using CloudWatch Metrics and Alarms is a more suitable approach. I'd go with option D. It seems like a more direct and efficient approach. By using custom CloudWatch metrics for the DynamoDB stream, you can specifically track the relevant data without the need for additional CloudWatch Logs Insights queries. The alarm will then allow you to easily visualize and monitor the number of unique customers affected by the issue each day on the CloudWatch dashboard.
upvoted 1 times
...
Claire_KMT
1 week, 3 days ago
A. Grant the Lambda function’s execution role permissions to upload logs to Amazon CloudWatch Logs. Implement a CloudWatch Logs Insights query that selects the number of unique customers for orders with order quantity equal to 0 and groups the results in 1-day periods. Add the CloudWatch Logs Insights query to a CloudWatch dashboard. Here's why this option is the best choice: CloudWatch Logs Insights is designed for querying and analyzing log data, making it well-suited for this task. By configuring the Lambda function's execution role to upload logs to CloudWatch Logs, you ensure that the log data is available for analysis. You can use a CloudWatch Logs Insights query to identify unique customers for orders with a quantity of 0 and group the results by day, providing the desired daily count of affected customers. The results of the query can be added to a CloudWatch dashboard, making it easily accessible for monitoring.
upvoted 1 times
...
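For the option A approach, the dashboard widget would be driven by a CloudWatch Logs Insights query over the Lambda function's log group. A sketch of such a query; the customerId and quantity field names are hypothetical and assume the function logs each order as structured JSON.

```
fields @timestamp, customerId, quantity
| filter quantity = 0
| stats count_distinct(customerId) as affectedCustomers by bin(1d)
```

The query filters to the bad orders and counts distinct customers per one-day bin, which is exactly the daily unique-customer count the dashboard needs.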
Question #213 Topic 1

A developer needs to troubleshoot an AWS Lambda function in a development environment. The Lambda function is configured in VPC mode and needs to connect to an existing Amazon RDS for SQL Server DB instance. The DB instance is deployed in a private subnet and accepts connections by using port 1433.

When the developer tests the function, the function reports an error when it tries to connect to the database.

Which combination of steps should the developer take to diagnose this issue? (Choose two.)

  • A. Check that the function’s security group has outbound access on port 1433 to the DB instance’s security group. Check that the DB instance’s security group has inbound access on port 1433 from the function’s security group.
  • B. Check that the function’s security group has inbound access on port 1433 from the DB instance’s security group. Check that the DB instance’s security group has outbound access on port 1433 to the function’s security group.
  • C. Check that the VPC is set up for a NAT gateway. Check that the DB instance has the public access option turned on.
  • D. Check that the function’s execution role permissions include rds:DescribeDBInstances, rds:ModifyDBInstance, and rds:DescribeDBSecurityGroups for the DB instance.
  • E. Check that the function’s execution role permissions include ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface.
Correct Answer: AC 🗳️

Community vote distribution
AD (100%)

Jing2023
1 week, 2 days ago
Selected Answer: AD
A and D
upvoted 3 times
...
mitch151
1 week, 2 days ago
I believe It's A and D. Unsure on A, but D seems to be confirmed by this link: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/lambda-rds-connect.html
upvoted 3 times
...
Claire_KMT
1 week, 2 days ago
A and B
upvoted 1 times
...
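For context on option E: a Lambda function configured for VPC access needs elastic network interface (ENI) permissions in its execution role. These are the same permissions bundled in the `AWSLambdaVPCAccessExecutionRole` managed policy; a minimal custom statement would look roughly like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    }
  ]
}
```

Without these, the function cannot attach to the private subnet at all, which is why option E is a diagnosis step alongside the security-group checks in option A.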
Question #214 Topic 1

A developer needs to launch a new Amazon EC2 instance by using the AWS CLI.

Which AWS CLI command should the developer use to meet this requirement?

  • A. aws ec2 bundle-instance
  • B. aws ec2 start-instances
  • C. aws ec2 confirm-product-instance
  • D. aws ec2 run-instances
Correct Answer: D 🗳️

Claire_KMT
1 week, 2 days ago
D. aws ec2 run-instances So, to create a new EC2 instance using the AWS CLI, you would typically use the aws ec2 run-instances command, providing the necessary parameters such as the AMI ID, instance type, security groups, and key pair, among others.
upvoted 2 times
...
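A minimal `run-instances` invocation looks like the following; the AMI ID, key pair, and subnet ID are placeholders, not values from the question:

```shell
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --count 1
```

By contrast, `start-instances` (option B) only restarts an instance that already exists in the stopped state.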
Question #215 Topic 1

A developer needs to manage AWS infrastructure as code and must be able to deploy multiple identical copies of the infrastructure, stage changes, and revert to previous versions.

Which approach addresses these requirements?

  • A. Use cost allocation reports and AWS OpsWorks to deploy and manage the infrastructure.
  • B. Use Amazon CloudWatch metrics and alerts along with resource tagging to deploy and manage the infrastructure.
  • C. Use AWS Elastic Beanstalk and AWS CodeCommit to deploy and manage the infrastructure.
  • D. Use AWS CloudFormation and AWS CodeCommit to deploy and manage the infrastructure.
Correct Answer: D 🗳️

Community vote distribution
D (100%)

Jing2023
1 week, 2 days ago
Selected Answer: D
this is the only option mentioning infra as code.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
D. Use AWS CloudFormation and AWS CodeCommit to deploy and manage the infrastructure. Here's why this is the most appropriate choice: AWS CloudFormation: It allows you to define your infrastructure as code using templates, which can be version-controlled. You can create, update, and delete stacks of AWS resources in a controlled and predictable manner. This aligns with the requirement to deploy multiple identical copies of the infrastructure, stage changes, and revert to previous versions. AWS CodeCommit: It provides a fully managed source control service, allowing you to store and version-control your CloudFormation templates. This ensures that you can manage and track changes to your infrastructure configurations.
upvoted 2 times
...
Question #216 Topic 1

A developer is working on an AWS Lambda function that accesses Amazon DynamoDB. The Lambda function must retrieve an item and update some of its attributes, or create the item if it does not exist. The Lambda function has access to the primary key.

Which IAM permissions should the developer request for the Lambda function to achieve this functionality?

  • A. dynamodb:DeleteItem
    dynamodb:GetItem
    dynamodb:PutItem
  • B. dynamodb:UpdateItem
    dynamodb:GetItem
    dynamodb:DescribeTable
  • C. dynamodb:GetRecords
    dynamodb:PutItem
    dynamodb:UpdateTable
  • D. dynamodb:UpdateItem
    dynamodb:GetItem
    dynamodb:PutItem
Correct Answer: D 🗳️

Community vote distribution
D (100%)

Claire_KMT
1 week, 2 days ago
D. dynamodb:UpdateItem, dynamodb:GetItem, and dynamodb:PutItem Here's why: dynamodb:GetItem: This permission allows the Lambda function to retrieve an item from DynamoDB. dynamodb:UpdateItem: This permission allows the Lambda function to update the attributes of an item in DynamoDB. dynamodb:PutItem: This permission allows the Lambda function to create a new item if it doesn't already exist in the DynamoDB table.
upvoted 2 times
...
didorins
1 week, 3 days ago
Selected Answer: D
PutItem is to CREATE a new item or replace an old item with a new item. GetItem is to retrieve an item. UpdateItem is to update the attributes. Hence answer D.
upvoted 1 times
...
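The read-modify-write flow behind answer D can be sketched in plain Python. A dict-backed stand-in replaces the table here so the sketch runs offline; a real implementation would call `get_item`, `update_item`, and `put_item` on a boto3 Table resource, which map to the three permissions in option D:

```python
class FakeTable:
    """Minimal in-memory stand-in for a DynamoDB table."""
    def __init__(self):
        self._items = {}

    def get(self, key):
        return self._items.get(key)

    def put(self, key, item):
        self._items[key] = item


def upsert_item(table, key, new_attrs):
    """Fetch the item by primary key; update some of its attributes
    if it exists, otherwise create it."""
    item = table.get(key)                # needs dynamodb:GetItem
    if item is None:
        table.put(key, dict(new_attrs))  # needs dynamodb:PutItem
        return "created"
    item.update(new_attrs)               # needs dynamodb:UpdateItem
    table.put(key, item)
    return "updated"
```

The flow never deletes or describes anything, which is why options A, B, and C each include at least one unnecessary or missing permission.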
Question #217 Topic 1

A developer has built a market application that stores pricing data in Amazon DynamoDB with Amazon ElastiCache in front. The prices of items in the market change frequently. Sellers have begun complaining that, after they update the price of an item, the price does not actually change in the product listing.

What could be causing this issue?

  • A. The cache is not being invalidated when the price of the item is changed.
  • B. The price of the item is being retrieved using a write-through ElastiCache cluster.
  • C. The DynamoDB table was provisioned with insufficient read capacity.
  • D. The DynamoDB table was provisioned with insufficient write capacity.
Correct Answer: A 🗳️

Claire_KMT
1 week, 2 days ago
A. The cache is not being invalidated when the price of the item is changed. In a caching setup using Amazon ElastiCache in front of Amazon DynamoDB, if the cache is not being invalidated or updated when data in DynamoDB is changed, it can result in stale data being served from the cache, leading to the observed behavior. To resolve this issue, you should implement a mechanism to invalidate or update the cache whenever the price of an item is changed in DynamoDB to ensure that the most up-to-date data is retrieved from the cache or DynamoDB.
upvoted 2 times
...
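The fix for answer A is to invalidate (or overwrite) the cached entry whenever the price is written. A minimal in-memory sketch of the pattern; real code would use an ElastiCache client and a DynamoDB table in place of the two dicts:

```python
class PriceStore:
    """Database of record plus a read-through cache that is
    invalidated on every write, so reads never serve stale prices."""
    def __init__(self):
        self.db = {}     # stand-in for DynamoDB
        self.cache = {}  # stand-in for ElastiCache

    def get_price(self, item_id):
        if item_id in self.cache:      # cache hit
            return self.cache[item_id]
        price = self.db[item_id]       # cache miss: read from the DB
        self.cache[item_id] = price    # populate for later reads
        return price

    def set_price(self, item_id, price):
        self.db[item_id] = price
        self.cache.pop(item_id, None)  # invalidate the stale entry
```

Dropping the `cache.pop` call reproduces the sellers' complaint: the listing keeps serving the old cached price after an update.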
Question #218 Topic 1

A company requires that all applications running on Amazon EC2 use IAM roles to gain access to AWS services. A developer is modifying an application that currently relies on IAM user access keys stored in environment variables to access Amazon DynamoDB tables using boto, the AWS SDK for Python.

The developer associated a role with the same permissions as the IAM user to the EC2 instance, then deleted the IAM user. When the application was restarted, the AWS AccessDeniedException messages started appearing in the application logs. The developer was able to use their personal account on the server to run DynamoDB API commands using the AWS CLI.

What is the MOST likely cause of the exception?

  • A. IAM policies might take a few minutes to propagate to resources.
  • B. Disabled environment variable credentials are still being used by the application.
  • C. The AWS SDK does not support credentials obtained using an instance role.
  • D. The instance’s security group does not allow access to http://169.254.169.254.
Correct Answer: B 🗳️

Community vote distribution
B (100%)

Claire_KMT
1 week, 2 days ago
B. Disabled environment variable credentials are still being used by the application.
upvoted 1 times
...
didorins
1 week, 3 days ago
Selected Answer: B
B is the only viable answer.
upvoted 1 times
...
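Answer B follows from the SDK's credential provider chain: boto checks environment variables before the instance metadata service, so the deleted user's keys in the environment mask the new instance role. A simplified model of that precedence (not boto's actual implementation):

```python
def resolve_credentials(env, instance_role):
    """Return credentials the way the SDK provider chain would:
    environment variables win over the EC2 instance profile."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return ("env", env["AWS_ACCESS_KEY_ID"])
    if instance_role is not None:
        return ("instance-profile", instance_role)
    raise RuntimeError("no credentials found")
```

The developer's personal CLI session worked because it did not inherit the application's environment variables, so the chain fell through to valid credentials.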
Question #219 Topic 1

A company has an existing application that has hardcoded database credentials. A developer needs to modify the existing application. The application is deployed in two AWS Regions with an active-passive failover configuration to meet company’s disaster recovery strategy.

The developer needs a solution to store the credentials outside the code. The solution must comply with the company’s disaster recovery strategy.

Which solution will meet these requirements in the MOST secure way?

  • A. Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
  • B. Store credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
  • C. Store credentials in a config file. Upload the config file to an S3 bucket in the primary Region. Enable Cross-Region Replication (CRR) to an S3 bucket in the secondary region. Update the application to access the config file from the S3 bucket, based on the Region.
  • D. Store credentials in a config file. Upload the config file to an Amazon Elastic File System (Amazon EFS) file system. Update the application to use the Amazon EFS file system Regional endpoints to access the config file in the primary and secondary Regions.
Correct Answer: A 🗳️

Community vote distribution
A (100%)

Claire_KMT
1 week, 2 days ago
B. Store credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.
upvoted 1 times
...
didorins
1 week, 3 days ago
Selected Answer: A
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html
upvoted 4 times
...
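For answer A, the application only needs to build the Region-appropriate identifier at startup, since a replicated secret keeps the same name in each Region. A sketch of that lookup; the account ID, secret name, and six-character suffix below are illustrative placeholders:

```python
def secret_arn(region, account_id="123456789012",
               name="prod/db-credentials", suffix="AbCdEf"):
    """Construct the Region-specific ARN of a replicated secret.
    All parameter defaults are placeholder values."""
    return (f"arn:aws:secretsmanager:{region}:{account_id}"
            f":secret:{name}-{suffix}")
```

During failover the application passes the secondary Region's ARN (or simply the secret name with a Region-scoped client) to Secrets Manager and retrieves the same replicated credentials.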
Question #220 Topic 1

A developer is receiving HTTP 400: ThrottlingException errors intermittently when calling the Amazon CloudWatch API. When a call fails, no data is retrieved.

What best practice should first be applied to address this issue?

  • A. Contact AWS Support for a limit increase.
  • B. Use the AWS CLI to get the metrics.
  • C. Analyze the applications and remove the API call.
  • D. Retry the call with exponential backoff.
Correct Answer: D 🗳️

Community vote distribution
D (67%)
A (33%)

vruizrob
1 week ago
D. Retries with exponential backoff; operation with an exponentially increasing wait time
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
D. Retry the call with exponential backoff.
upvoted 2 times
...
didorins
1 week, 3 days ago
Selected Answer: D
Because examtopic won't allow me to modify my previous answer to use the correct option. Exponential Backoff is D
upvoted 2 times
...
didorins
1 week, 3 days ago
Selected Answer: A
You are doing too many requests. Try less frequent with exponential backoff.
upvoted 1 times
...
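The retry pattern in answer D can be sketched as follows. The attempt count and base delay are illustrative; production code would add jitter and would usually lean on the SDK's built-in retry configuration rather than hand-rolling this:

```python
import time

class ThrottlingError(Exception):
    """Stand-in for the API's HTTP 400 ThrottlingException."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.1):
    """Retry fn on throttling, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ThrottlingError:
            if attempt == max_attempts - 1:
                raise                                 # out of retries
            time.sleep(base_delay * (2 ** attempt))   # 0.1s, 0.2s, 0.4s, ...
```

Spacing out retries lets the caller stay under the API rate limit, which is why it is the first-line best practice before requesting a limit increase (option A).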
Question #221 Topic 1

An application needs to use the IP address of the client in its processing. The application has been moved into AWS and has been placed behind an Application Load Balancer (ALB). However, all the client IP addresses now appear to be the same. The application must maintain the ability to scale horizontally.

Based on this scenario, what is the MOST cost-effective solution to this problem?

  • A. Remove the application from the ALB. Delete the ALB and change Amazon Route 53 to direct traffic to the instance running the application.
  • B. Remove the application from the ALB. Create a Classic Load Balancer in its place. Direct traffic to the application using the HTTP protocol.
  • C. Alter the application code to inspect the X-Forwarded-For header. Ensure that the code can work properly if a list of IP addresses is passed in the header.
  • D. Alter the application code to inspect a custom header. Alter the client code to pass the IP address in the custom header.
Correct Answer: C 🗳️

Community vote distribution
C (100%)

Claire_KMT
1 week, 2 days ago
C. Alter the application code to inspect the X-Forwarded-For header. Ensure that the code can work properly if a list of IP addresses is passed in the header.
upvoted 1 times
...
didorins
1 week, 3 days ago
Selected Answer: C
If you need to see external IP address and your app is behind ALB, always use x-forwarded-for https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html
upvoted 1 times
...
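For answer C, the original client IP is the left-most entry in `X-Forwarded-For`; each proxy in the chain appends its caller's address, so the header can carry a comma-separated list. A minimal parser:

```python
def client_ip(headers):
    """Return the originating client IP from X-Forwarded-For.
    The left-most entry is the client; later entries are proxies."""
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    return first or None
```

Because every instance behind the ALB can read the header independently, this approach keeps horizontal scaling intact, unlike option A.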
Question #222 Topic 1

A developer is designing a serverless application that customers use to select seats for a concert venue. Customers send the ticket requests to an Amazon API Gateway API with an AWS Lambda function that acknowledges the order and generates an order ID. The application includes two additional Lambda functions: one for inventory management and one for payment processing. These two Lambda functions run in parallel and write the order to an Amazon Dynamo DB table.

The application must provide seats to customers according to the following requirements. If a seat is accidentally sold more than once, the first order that the application received must get the seat. In these cases, the application must process the payment for only the first order. However, if the first order is rejected during payment processing, the second order must get the seat. In these cases, the application must process the payment for the second order.

Which solution will meet these requirements?

  • A. Send the order ID to an Amazon Simple Notification Service (Amazon SNS) FIFO topic that fans out to one Amazon Simple Queue Service (Amazon SQS) FIFO queue for inventory management and another SQS FIFO queue for payment processing.
  • B. Change the Lambda function that generates the order ID to initiate the Lambda function for inventory management. Then initiate the Lambda function for payment processing.
  • C. Send the order ID to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the Lambda functions for inventory management and payment processing to the topic.
  • D. Deliver the order ID to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda functions for inventory management and payment processing to poll the queue.
Correct Answer: A 🗳️

Community vote distribution
A (50%)
D (50%)

MarkTpTTT55
3 days, 9 hours ago
Selected Answer: A
A. The only viable solution
upvoted 1 times
...
Claire_KMT
1 week ago
Selected Answer: D
D. Deliver the order ID to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda functions for inventory management and payment processing to poll the queue.
upvoted 1 times
...
Question #223 Topic 1

An application uses AWS X-Ray to generate a large amount of trace data on an hourly basis. A developer wants to use filter expressions to limit the returned results through user-specified custom attributes.

How should the developer use filter expressions to filter the results in X-Ray?

  • A. Add custom attributes as annotations in the segment document.
  • B. Add custom attributes as metadata in the segment document.
  • C. Add custom attributes as new segment fields in the segment document.
  • D. Create new sampling rules that are based on custom attributes.
Correct Answer: A 🗳️

Community vote distribution
A (50%)
B (50%)

PrakashM14
5 days, 23 hours ago
Selected Answer: A
To filter the results in AWS X-Ray using custom attributes, the developer should add custom attributes as annotations in the segment document.
upvoted 1 times
...
Claire_KMT
1 week ago
Selected Answer: B
B. Add custom attributes as metadata in the segment document. Custom attributes are best added as metadata in the segment document because X-Ray filter expressions can use metadata to filter traces. Annotations and new segment fields are not typically used for filtering traces in this context.
upvoted 1 times
...
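On answer A: X-Ray indexes annotations (simple key-value pairs in the segment document) for use in filter expressions, while metadata is stored with the trace but is not indexed and cannot be filtered on. A plain-Python model of that distinction (this is an illustration, not the X-Ray SDK's API; the field values are made up):

```python
def matches_filter(segment, key, value):
    """Filter expressions can only see indexed annotations,
    never metadata."""
    return segment.get("annotations", {}).get(key) == value

segment = {
    "name": "checkout",
    "annotations": {"customer_tier": "gold"},   # indexed, filterable
    "metadata": {"debug": {"cart_items": 14}},  # stored, NOT filterable
}
```

In the real SDKs this corresponds to calling `put_annotation` for filterable attributes and `put_metadata` for everything else.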
Question #224 Topic 1

A web application is using Amazon Kinesis Data Streams for clickstream data that may not be consumed for up to 12 hours.

How can the developer implement encryption at rest for data within the Kinesis Data Streams?

  • A. Enable SSL connections to Kinesis.
  • B. Use Amazon Kinesis Consumer Library.
  • C. Encrypt the data once it is at rest with a Lambda function.
  • D. Enable server-side encryption in Kinesis Data Streams.
Correct Answer: D 🗳️

Community vote distribution
D (100%)

Claire_KMT
1 week ago
Selected Answer: D
D. Enable server-side encryption in Kinesis Data Streams. Amazon Kinesis Data Streams allows you to enable server-side encryption, which encrypts data at rest. This ensures that data stored within the Kinesis Data Streams is protected with encryption.
upvoted 1 times
...
didorins
1 week, 2 days ago
Selected Answer: D
https://docs.aws.amazon.com/streams/latest/dev/server-side-encryption.html
upvoted 1 times
...
Question #225 Topic 1

An application processes, in real time, millions of events that are received through an API.

What service could be used to allow multiple consumers to process the data concurrently and MOST cost-effectively?

  • A. Amazon SNS with fanout to an SQS queue for each application
  • B. Amazon SNS with fanout to an SQS FIFO (first-in, first-out) queue for each application
  • C. Amazon Kinesis Firehose
  • D. Amazon Kinesis Data Streams
Correct Answer: D 🗳️

Community vote distribution
D (100%)

Claire_KMT
1 week, 2 days ago
D. Amazon Kinesis Data Streams. Amazon Kinesis Data Streams is designed for real-time data streaming and allows multiple consumers to process data concurrently and in real-time. It can handle millions of events and provides a scalable and cost-effective solution for handling high-throughput data streams.
upvoted 1 times
...
didorins
1 week, 2 days ago
Selected Answer: D
Real-time data processing is KDS
upvoted 1 times
...
Question #226 Topic 1

Given the following AWS CloudFormation template:



What is the MOST efficient way to reference the new Amazon S3 bucket from another AWS CloudFormation template?

  • A. Add an Export declaration to the Outputs section of the original template and use ImportValue in other templates.
  • B. Add Exported: true to the Content.Bucket in the original template and use ImportResource in other templates.
  • C. Create a custom AWS CloudFormation resource that gets the bucket name from the ContentBucket resource of the first stack.
  • D. Use Fn::Include to include the existing template in other templates and use the ContentBucket resource directly.
Correct Answer: A 🗳️

Claire_KMT
1 week, 2 days ago
A. Add an Export declaration to the Outputs section of the original template and use ImportValue in other templates.
upvoted 1 times
papason
1 week ago
By adding an Export declaration to the Outputs section of the original CloudFormation template, you can make the bucket name available for other templates to import and use. This allows you to reference the bucket name directly in other templates without the need for additional resources or custom logic.
upvoted 1 times
...
...
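The Export/ImportValue pairing in answer A looks roughly like this. The logical ID `ContentBucket` comes from the question; the export name is a placeholder:

```yaml
# Template 1: exports the bucket name
Outputs:
  ContentBucketName:
    Value: !Ref ContentBucket
    Export:
      Name: shared-content-bucket-name

# Template 2: imports the exported name
Outputs:
  ImportedBucketName:
    Value: !ImportValue shared-content-bucket-name
```

Note that `Exported: true` and `ImportResource` (option B) and `Fn::Include` (option D) are not CloudFormation features, which narrows the choice quickly.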
Question #227 Topic 1

A developer has built an application that inserts data into an Amazon DynamoDB table. The table is configured to use provisioned capacity. The application is deployed on a burstable nano Amazon EC2 instance. The application logs show that the application has been failing because of a ProvisionedThroughputExceededException error.

Which actions should the developer take to resolve this issue? (Choose two.)

  • A. Move the application to a larger EC2 instance.
  • B. Increase the number of read capacity units (RCUs) that are provisioned for the DynamoDB table.
  • C. Reduce the frequency of requests to DynamoDB by implementing exponential backoff.
  • D. Increase the frequency of requests to DynamoDB by decreasing the retry delay.
  • E. Change the capacity mode of the DynamoDB table from provisioned to on-demand.
Correct Answer: CE 🗳️

Claire_KMT
1 week, 2 days ago
B. Increase the number of read capacity units (RCUs) that are provisioned for the DynamoDB table. OR E. Change the capacity mode of the DynamoDB table from provisioned to on-demand. C. Reduce the frequency of requests to DynamoDB by implementing exponential backoff.
upvoted 1 times
tapan666
1 week, 2 days ago
It 'inserts' data, so it needs WCUs and not RCUs. So option B is invalid too. C and E are the correct options.
upvoted 3 times
...
...
Question #228 Topic 1

A company is hosting a workshop for external users and wants to share the reference documents with the external users for 7 days. The company stores the reference documents in an Amazon S3 bucket that the company owns.

What is the MOST secure way to share the documents with the external users?

  • A. Use S3 presigned URLs to share the documents with the external users. Set an expiration time of 7 days.
  • B. Move the documents to an Amazon WorkDocs folder. Share the links of the WorkDocs folder with the external users.
  • C. Create temporary IAM users that have read-only access to the S3 bucket. Share the access keys with the external users. Expire the credentials after 7 days.
  • D. Create a role that has read-only access to the S3 bucket. Share the Amazon Resource Name (ARN) of this role with the external users.
Correct Answer: A 🗳️

Community vote distribution
A (100%)

Claire_KMT
1 week, 2 days ago
A. Use S3 presigned URLs to share the documents with the external users. Set an expiration time of 7 days.
upvoted 1 times
...
didorins
1 week, 2 days ago
Selected Answer: A
Temporary access to S3 object to external users is Pre-signed URL
upvoted 2 times
...
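Answer A maps to a one-line CLI call; the object key below is a placeholder. Seven days (604,800 seconds) also happens to be the maximum expiration a SigV4 presigned URL supports:

```shell
# Presign one reference document for 7 days (the maximum allowed)
aws s3 presign s3://DOC-EXAMPLE-BUCKET/workshop/agenda.pdf --expires-in 604800
```

The external users need no AWS identity at all, which is what makes this more secure (and far less overhead) than handing out IAM users or role ARNs.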
Question #229 Topic 1

A developer is planning to use an Amazon API Gateway and AWS Lambda to provide a REST API. The developer will have three distinct environments to manage: development, test, and production.

How should the application be deployed while minimizing the number of resources to manage?

  • A. Create a separate API Gateway and separate Lambda function for each environment in the same Region.
  • B. Assign a Region for each environment and deploy API Gateway and Lambda to each Region.
  • C. Create one API Gateway with multiple stages with one Lambda function with multiple aliases.
  • D. Create one API Gateway and one Lambda function, and use a REST parameter to identify the environment.
Correct Answer: C 🗳️

Claire_KMT
1 week, 2 days ago
C. Create one API Gateway with multiple stages with one Lambda function with multiple aliases.
upvoted 1 times
...
Question #230 Topic 1

A developer registered an AWS Lambda function as a target for an Application Load Balancer (ALB) using a CLI command. However, the Lambda function is not being invoked when the client sends requests through the ALB.

Why is the Lambda function not being invoked?

  • A. A Lambda function cannot be registered as a target for an ALB.
  • B. A Lambda function can be registered with an ALB using AWS Management Console only.
  • C. The permissions to invoke the Lambda function are missing.
  • D. Cross-zone is not enabled on the ALB.
Correct Answer: C 🗳️

Claire_KMT
1 week, 2 days ago
C. The permissions to invoke the Lambda function are missing.
upvoted 2 times
...
Question #231 Topic 1

A developer is creating an AWS Lambda function that will connect to an Amazon RDS for MySQL instance. The developer wants to store the database credentials. The database credentials need to be encrypted and the database password needs to be automatically rotated.

Which solution will meet these requirements?

  • A. Store the database credentials as environment variables for the Lambda function. Set the environment variables to rotate automatically.
  • B. Store the database credentials in AWS Secrets Manager. Set up managed rotation on the database credentials.
  • C. Store the database credentials in AWS Systems Manager Parameter Store as secure string parameters. Set up managed rotation on the parameters.
  • D. Store the database credentials in the X-Amz-Security-Token parameter. Set up managed rotation on the parameter.
Correct Answer: B 🗳️

Claire_KMT
1 week, 2 days ago
B. Store the database credentials in AWS Secrets Manager. Set up managed rotation on the database credentials.
upvoted 2 times
...
Question #232 Topic 1

A developer wants to reduce risk when deploying a new version of an existing AWS Lambda function. To test the Lambda function, the developer needs to split the traffic between the existing version and the new version of the Lambda function.

Which solution will meet these requirements?

  • A. Configure a weighted routing policy in Amazon Route 53. Associate the versions of the Lambda function with the weighted routing policy.
  • B. Create a function alias. Configure the alias to split the traffic between the two versions of the Lambda function.
  • C. Create an Application Load Balancer (ALB) that uses the Lambda function as a target. Configure the ALB to split the traffic between the two versions of the Lambda function.
  • D. Create the new version of the Lambda function as a Lambda layer on the existing version. Configure the function to split the traffic between the two layers.
Correct Answer: B 🗳️

Claire_KMT
1 week, 2 days ago
B. Create a function alias. Configure the alias to split the traffic between the two versions of the Lambda function.
upvoted 1 times
...
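The alias-based traffic shifting in answer B is a single CLI call; the function name, alias name, version numbers, and 10% weight here are illustrative:

```shell
# Send 90% of invocations to version 1 and 10% to version 2
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --routing-config 'AdditionalVersionWeights={"2"=0.1}'
```

The alias's primary version receives the remaining traffic, so promoting the new version later is just another `update-alias` call with the weight removed.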
Question #233 Topic 1

A developer has created a large AWS Lambda function. Deployment of the function is failing because of an InvalidParameterValueException error. The error message indicates that the unzipped size of the function exceeds the maximum supported value.

Which actions can the developer take to resolve this error? (Choose two.)

  • A. Submit a quota increase request to AWS Support to increase the function to the required size.
  • B. Use a compression algorithm that is more efficient than ZIP.
  • C. Break up the function into multiple smaller functions.
  • D. Zip the .zip file twice to compress the file more.
  • E. Move common libraries, function dependencies, and custom runtimes into Lambda layers.
Correct Answer: CE 🗳️

Question #234 Topic 1

A developer is troubleshooting an application in an integration environment. In the application, an Amazon Simple Queue Service (Amazon SQS) queue consumes messages and then an AWS Lambda function processes the messages. The Lambda function transforms the messages and makes an API call to a third-party service.

There has been an increase in application usage. The third-party API frequently returns an HTTP 429 Too Many Requests error message. The error message prevents a significant number of messages from being processed successfully.

How can the developer resolve this issue?

  • A. Increase the SQS event source’s batch size setting.
  • B. Configure provisioned concurrency for the Lambda function based on the third-party API’s documented rate limits.
  • C. Increase the retry attempts and maximum event age in the Lambda function’s asynchronous configuration.
  • D. Configure maximum concurrency on the SQS event source based on the third-party service’s documented rate limits.
Correct Answer: A 🗳️

PrakashM14
5 days, 22 hours ago
Selected Answer: B
Option B addresses the issue by configuring provisioned concurrency for the Lambda function. Provisioned concurrency ensures that a specified number of concurrent executions of the Lambda function are always available. This can help in managing the third-party API rate limits by controlling the number of simultaneous requests made to the API. By setting the provisioned concurrency to a value that aligns with the third-party API's rate limits, you can avoid exceeding those limits and reduce the occurrence of HTTP 429 errors.
upvoted 1 times
...
Jing2023
1 week, 2 days ago
Selected Answer: C
A. increase the batch size does not change how many items being processed. C is from Configuring error handling for asynchronous invocation — You can set it up when creating the lambda. Maximum age of event — The maximum amount of time Lambda retains an event in the asynchronous event queue, up to 6 hours. Retry attempts — The number of times Lambda retries when the function returns an error, between 0 and 2.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
B. Configure provisioned concurrency for the Lambda function based on the third-party API’s documented rate limits.
upvoted 1 times
...
Question #235 Topic 1

A company has a three-tier application that is deployed in Amazon Elastic Container Service (Amazon ECS). The application is using an Amazon RDS for MySQL DB instance. The application performs more database reads than writes.

During times of peak usage, the application’s performance degrades. When this performance degradation occurs, the DB instance’s ReadLatency metric in Amazon CloudWatch increases suddenly.

How should a developer modify the application to improve performance?

  • A. Use Amazon ElastiCache to cache query results.
  • B. Scale the ECS cluster to contain more ECS instances.
  • C. Add read capacity units (RCUs) to the DB instance.
  • D. Modify the ECS task definition to increase the task memory.

Correct Answer: A 🗳️

Claire_KMT
1 week, 2 days ago
A. Use Amazon ElastiCache to cache query results.
upvoted 1 times
...
Question #236 Topic 1

A company has an online web application that includes a product catalog. The catalog is stored in an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. The application must be able to list the objects in the S3 bucket and must be able to download objects through an IAM policy.

Which policy allows MINIMUM access to meet these requirements?

  • A.
  • B.
  • C.
  • D.

Correct Answer: A 🗳️

Claire_KMT
1 week, 2 days ago
A is the correct answer.
upvoted 2 times
...
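The policy figures for options A-D are not reproduced in this dump. For reference, a minimal policy meeting the stated requirements (list the bucket, download its objects) could look like the following sketch, shown as a Python dict. The decisive detail is the resource split: s3:ListBucket applies to the bucket ARN, while s3:GetObject applies to the object ARNs.

```python
# Minimal IAM policy for listing and downloading from DOC-EXAMPLE-BUCKET.
# Sketch only -- the actual answer-choice policies are not shown above.
minimal_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            # ListBucket is a bucket-level action: no trailing /*
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            # GetObject is an object-level action: bucket-arn/*
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
        },
    ],
}
```

Mixing up the two resource scopes (e.g., granting s3:ListBucket on `/*`) is the usual trap in these policy questions.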
Question #237 Topic 1

A developer is writing an application to encrypt files outside of AWS before uploading the files to an Amazon S3 bucket. The encryption must be symmetric and must be performed inside the application.

How can the developer implement the encryption in the application to meet these requirements?

  • A. Create a data key in AWS Key Management Service (AWS KMS). Use the AWS Encryption SDK to encrypt the files.
  • B. Create a Hash-Based Message Authentication Code (HMAC) key in AWS Key Management Service (AWS KMS). Use the AWS Encryption SDK to encrypt the files.
  • C. Create a data key pair in AWS Key Management Service (AWS KMS). Use the AWS CLI to encrypt the files.
  • D. Create a data key in AWS Key Management Service (AWS KMS). Use the AWS CLI to encrypt the files.

Correct Answer: A 🗳️

Jing2023
1 week, 2 days ago
Selected Answer: A
C and D cannot make it within the application.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
A. Create a data key in AWS Key Management Service (AWS KMS). Use the AWS Encryption SDK to encrypt the files.
upvoted 2 times
...
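For reference, option A's data-key step boils down to a single KMS GenerateDataKey request, which the AWS Encryption SDK issues on the application's behalf. The sketch below shows the request in parameter form; the key alias is a hypothetical assumption, and no AWS call is made here.

```python
# Hedged sketch of the GenerateDataKey request behind option A.
# "alias/app-files" is a hypothetical customer master key alias.
generate_data_key_request = {
    "KeyId": "alias/app-files",
    "KeySpec": "AES_256",  # symmetric 256-bit data key, as required
}

# KMS would return two things:
#  - Plaintext: the data key, used locally (inside the application)
#    to symmetrically encrypt the file, then discarded from memory.
#  - CiphertextBlob: the encrypted data key, stored alongside the
#    uploaded object so the file can be decrypted later.
```

This envelope-encryption flow is why A beats C/D: the AWS CLI alone does not perform the client-side symmetric encryption inside the application.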
Question #238 Topic 1

A developer is working on an application that is deployed on an Amazon EC2 instance. The developer needs a solution that will securely transfer files from the application to an Amazon S3 bucket.

What should the developer do to meet these requirements in the MOST secure way?

  • A. Create an IAM user. Create an access key for the IAM user. Store the access key in the application’s environment variables.
  • B. Create an IAM role. Create an access key for the IAM role. Store the access key in the application’s environment variables.
  • C. Create an IAM role. Configure the IAM role to access the specific Amazon S3 API calls the application requires. Associate the IAM role with the EC2 instance.
  • D. Configure an S3 bucket policy for the S3 bucket. Configure the S3 bucket policy to allow access for the EC2 instance ID.

Correct Answer: B 🗳️

doubleh9324
5 days, 7 hours ago
Selected Answer: C
c!!!!!!!!!!!!!
upvoted 1 times
...
bammy
1 week ago
C is the correct answer
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
C. Create an IAM role. Configure the IAM role to access the specific Amazon S3 API calls the application requires. Associate the IAM role with the EC2 instance.
upvoted 3 times
...
didorins
1 week, 2 days ago
Selected Answer: C
Create a role with the required permissions. Attach it to the EC2 instance as an instance profile.
upvoted 1 times
...
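Option C can be pictured as two policy documents: the trust policy that lets the EC2 service assume the role, and a least-privilege permissions policy scoped to the specific S3 calls the application needs. A sketch follows; the bucket name and the single action are illustrative assumptions.

```python
# Trust policy: allows EC2 (via an instance profile) to assume the role.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permissions policy: only the S3 API call the application requires,
# on one hypothetical bucket ("app-uploads").
s3_upload_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::app-uploads/*",
        }
    ],
}
```

No long-lived access keys are stored anywhere, which is what makes C more secure than A/B.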
Question #239 Topic 1

A developer created a web API that receives requests by using an internet-facing Application Load Balancer (ALB) with an HTTPS listener. The developer configures an Amazon Cognito user pool and wants to ensure that every request to the API is authenticated through Amazon Cognito.

What should the developer do to meet this requirement?

  • A. Add a listener rule to the listener to return a fixed response if the Authorization header is missing. Set the fixed response to 401 Unauthorized.
  • B. Create an authentication action for the listener rules of the ALB. Set the rule action type to authenticate-cognito. Set the OnUnauthenticatedRequest field to “deny.”
  • C. Create an Amazon API Gateway API. Configure all API methods to be forwarded to the ALB endpoint. Create an authorizer of the COGNITO_USER_POOLS type. Configure every API method to use that authorizer.
  • D. Create a new target group that includes an AWS Lambda function target that validates the Authorization header by using Amazon Cognito. Associate the target group with the listener.

Correct Answer: B 🗳️

Claire_KMT
1 week, 2 days ago
B. Create an authentication action for the listener rules of the ALB. Set the rule action type to authenticate-cognito. Set the OnUnauthenticatedRequest field to “deny.”
upvoted 1 times
...
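For reference, option B's listener rule is expressed as an ordered Actions list in the ELBv2 CreateRule API, with the authenticate-cognito action running before the forward action. A sketch with placeholder ARNs and names (no AWS call is made):

```python
# Listener-rule Actions for option B, in the shape the elbv2
# CreateRule API expects. All ARNs/IDs below are placeholders.
actions = [
    {
        "Type": "authenticate-cognito",
        "Order": 1,  # authentication must run before forwarding
        "AuthenticateCognitoConfig": {
            "UserPoolArn": "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE",
            "UserPoolClientId": "example-client-id",
            "UserPoolDomain": "example-domain",
            # "deny" returns HTTP 401 to unauthenticated requests
            # instead of redirecting them to the Cognito login page.
            "OnUnauthenticatedRequest": "deny",
        },
    },
    {
        "Type": "forward",
        "Order": 2,
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/0123456789abcdef",
    },
]
```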
Question #240 Topic 1

A company recently deployed an AWS Lambda function. A developer notices an increase in the function throttle metrics in Amazon CloudWatch.

What are the MOST operationally efficient solutions to reduce the function throttling? (Choose two.)

  • A. Migrate the function to Amazon Elastic Kubernetes Service (Amazon EKS).
  • B. Increase the maximum age of events in Lambda.
  • C. Increase the function’s reserved concurrency.
  • D. Add the lambda:GetFunctionConcurrency action to the execution role.
  • E. Request a service quota change for increased concurrency.

Correct Answer: CE 🗳️

oussa_ama
5 days, 23 hours ago
The correct answer is C&E.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
C. Increase the function’s reserved concurrency: Reserved concurrency ensures that a specific number of concurrent executions are always available for your function. E. Request a service quota change for increased concurrency: If your application is experiencing throttling and the reserved concurrency isn't sufficient, you can request a service quota increase for additional concurrency.
upvoted 1 times
...
Question #241 Topic 1

A company is creating a REST service using an Amazon API Gateway with AWS Lambda integration. The service must run different versions for testing purposes.

What would be the BEST way to accomplish this?

  • A. Use an X-Version header to denote which version is being called and pass that header to the Lambda function(s).
  • B. Create an API Gateway Lambda authorizer to route API clients to the correct API version.
  • C. Create an API Gateway resource policy to isolate versions and provide context to the Lambda function(s).
  • D. Deploy the API versions as unique stages with unique endpoints and use stage variables to provide further context.

Correct Answer: D 🗳️

Claire_KMT
1 week, 2 days ago
D. Deploy the API versions as unique stages with unique endpoints and use stage variables to provide further context.
upvoted 1 times
...
Question #242 Topic 1

A company is using AWS CodePipeline to deliver one of its applications. The delivery pipeline is triggered by changes to the main branch of an AWS CodeCommit repository and uses AWS CodeBuild to implement the test and build stages of the process and AWS CodeDeploy to deploy the application.

The pipeline has been operating successfully for several months and there have been no modifications. Following a recent change to the application’s source code, AWS CodeDeploy has not deployed the updated application as expected.

What are the possible causes? (Choose two.)

  • A. The change was not made in the main branch of the AWS CodeCommit repository.
  • B. One of the earlier stages in the pipeline failed and the pipeline has terminated.
  • C. One of the Amazon EC2 instances in the company’s AWS CodePipeline cluster is inactive.
  • D. The AWS CodePipeline is incorrectly configured and is not invoking AWS CodeDeploy.
  • E. AWS CodePipeline does not have permissions to access AWS CodeCommit.

Correct Answer: AB 🗳️

tapan666
1 week, 2 days ago
Selected Answer: AB
A. The change was not made in the main branch of the AWS CodeCommit repository: In this pipeline setup, if the change was made in a branch other than the main branch, it would not trigger the pipeline, and therefore, AWS CodeDeploy wouldn't deploy the updated application. B. One of the earlier stages in the pipeline failed and the pipeline has terminated: If one of the preceding stages in the pipeline failed, it would prevent the subsequent stages, including AWS CodeDeploy, from being executed.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
B. One of the earlier stages in the pipeline failed and the pipeline has terminated. D. The AWS CodePipeline is incorrectly configured and is not invoking AWS CodeDeploy.
upvoted 1 times
...
Question #243 Topic 1

A developer is building a serverless application by using AWS Serverless Application Model (AWS SAM) on multiple AWS Lambda functions. When the application is deployed, the developer wants to shift 10% of the traffic to the new deployment of the application for the first 10 minutes after deployment. If there are no issues, all traffic must switch over to the new version.

Which change to the AWS SAM template will meet these requirements?

  • A. Set the Deployment Preference Type to Canary10Percent10Minutes. Set the AutoPublishAlias property to the Lambda alias.
  • B. Set the Deployment Preference Type to Linear10PercentEvery10Minutes. Set AutoPublishAlias property to the Lambda alias.
  • C. Set the Deployment Preference Type to Canary10Percent10Minutes. Set the PreTraffic and PostTraffic properties to the Lambda alias.
  • D. Set the Deployment Preference Type to Linear10PercentEvery10Minutes. Set PreTraffic and PostTraffic properties to the Lambda alias.

Correct Answer: B 🗳️

NinjaCloud
2 days, 17 hours ago
Answer: A! Option B, which uses the "Linear" deployment type, gradually shifts traffic, and doesn't fully meet the requirement of immediately switching all traffic if there are no issues within the first 10 minutes.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
A. Set the Deployment Preference Type to Canary10Percent10Minutes. Set the AutoPublishAlias property to the Lambda alias.
upvoted 1 times
...
didorins
1 week, 2 days ago
Selected Answer: C
C should be it. Shifting traffic in two batches is Canary; validation is done with hooks: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
upvoted 2 times
...
LemonGremlin
1 week, 3 days ago
Option C is the best choice for a canary deployment with the specific requirements mentioned in the scenario.
upvoted 2 times
...
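For reference, here is how the canary preference from options A/C looks in a SAM template; the function and hook names are illustrative, not from the question. Note that AutoPublishAlias takes an alias name, while PreTraffic/PostTraffic are optional validation Lambda functions set under Hooks (not set to the alias), which is the distinction the answer choices turn on.

```yaml
# Hypothetical function definition -- names are illustrative only.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    CodeUri: src/
    AutoPublishAlias: live            # publishes a version and points the alias at it
    DeploymentPreference:
      Type: Canary10Percent10Minutes  # 10% for 10 minutes, then shift the rest
      # Hooks reference validation Lambda functions, not the alias:
      # Hooks:
      #   PreTraffic: !Ref PreTrafficHookFunction
      #   PostTraffic: !Ref PostTrafficHookFunction
```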
Question #244 Topic 1

An AWS Lambda function is running in a company’s shared AWS account. The function needs to perform an additional ec2:DescribeInstances action that is directed at the company’s development accounts. A developer must configure the required permissions across the accounts.

How should the developer configure the permissions to adhere to the principle of least privilege?

  • A. Create an IAM role in the shared account. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship between the development accounts for this role. Update the Lambda function IAM role in the shared account by adding the ec2:DescribeInstances permission to the role.
  • B. Create an IAM role in the development accounts. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship with the shared account for this role. Update the Lambda function IAM role in the shared account by adding the iam:AssumeRole permissions.
  • C. Create an IAM role in the shared account. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship between the development accounts for this role. Update the Lambda function IAM role in the shared account by adding the iam:AssumeRole permissions.
  • D. Create an IAM role in the development accounts. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship with the shared account for this role. Update the Lambda function IAM role in the shared account by adding the ec2:DescribeInstances permission to the role.

Correct Answer: B 🗳️

PrakashM14
5 days, 21 hours ago
Selected Answer: B
Create an IAM role in the development accounts. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship with the shared account for this role. Update the Lambda function IAM role in the shared account by adding the iam:AssumeRole permissions.
upvoted 2 times
...
Kowsik_shashi
1 week, 1 day ago
Selected Answer: C
By using iam:AssumeRole, AWS allows you to implement the principle of least privilege, which means entities have only the permissions they require to perform specific tasks and nothing more.
upvoted 2 times
...
lbaker12
1 week, 2 days ago
Selected Answer: A
iam:AssumeRole doesn't exist; it is sts:AssumeRole. Also, creating IAM roles within the development accounts is unnecessary work.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
B. Create an IAM role in the development accounts. Add the ec2:DescribeInstances permission to the role. Establish a trust relationship with the shared account for this role. Update the Lambda function IAM role in the shared account by adding the iam:AssumeRole permissions.
upvoted 1 times
...
didorins
1 week, 2 days ago
B. To enable cross-account AWS service actions, create a role with the required permissions in the account that holds the resource. Establish a trust relationship with the account that will access the resource. Allow the accessing account to assume the role.
upvoted 1 times
...
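Option B amounts to three policy documents, sketched below with placeholder account IDs and a hypothetical role name. One caveat the comments raise: the actual IAM policy action is sts:AssumeRole, even though the answer text writes iam:AssumeRole.

```python
# 1) In each development account: a role holding only the needed
#    read permission...
dev_role_permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:DescribeInstances", "Resource": "*"}
    ],
}

# ...with a trust policy allowing the shared account to assume it.
dev_role_trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # shared account
            "Action": "sts:AssumeRole",  # the API action is sts:, not iam:
        }
    ],
}

# 2) On the Lambda execution role in the shared account: permission to
#    assume the development-account role (hypothetical role name).
lambda_assume_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::222222222222:role/DescribeInstancesRole",
        }
    ],
}
```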
Question #245 Topic 1

A developer is building a new application that will be deployed on AWS. The developer has created an AWS CodeCommit repository for the application. The developer has initialized a new project for the application by invoking the AWS Cloud Development Kit (AWS CDK) cdk init command.

The developer must write unit tests for the infrastructure as code (IaC) templates that the AWS CDK generates. The developer also must run a validation tool across all constructs in the CDK application to ensure that critical security configurations are activated.

Which combination of actions will meet these requirements with the LEAST development overhead? (Choose two.)

  • A. Use a unit testing framework to write custom unit tests against the cdk.out file that the AWS CDK generates. Run the unit tests in a continuous integration and continuous delivery (CI/CD) pipeline that is invoked after any commit to the repository.
  • B. Use the CDK assertions module to integrate unit tests with the application. Run the unit tests in a continuous integration and continuous delivery (CI/CD) pipeline that is invoked after any commit to the repository.
  • C. Use the CDK runtime context to set key-value pairs that must be present in the cdk.out file that the AWS CDK generates. Fail the stack synthesis if any violations are present.
  • D. Write a script that searches the application for specific key configuration strings. Configure the script to produce a report of any security violations.
  • E. Use the CDK Aspects class to create custom rules to apply to the CDK application. Fail the stack synthesis if any violations are present.

Correct Answer: BE 🗳️

Claire_KMT
1 week, 2 days ago
B. Use the CDK assertions module to integrate unit tests with the application. Run the unit tests in a continuous integration and continuous delivery (CI/CD) pipeline that is invoked after any commit to the repository. E. Use the CDK Aspects class to create custom rules to apply to the CDK application. Fail the stack synthesis if any violations are present.
upvoted 1 times
...
Question #246 Topic 1

An online sales company is developing a serverless application that runs on AWS. The application uses an AWS Lambda function that calculates order success rates and stores the data in an Amazon DynamoDB table. A developer wants an efficient way to invoke the Lambda function every 15 minutes.

Which solution will meet this requirement with the LEAST development effort?

  • A. Create an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes. Add the Lambda function as the target of the EventBridge rule.
  • B. Create an AWS Systems Manager document that has a script that will invoke the Lambda function on Amazon EC2. Use a Systems Manager Run Command task to run the shell script every 15 minutes.
  • C. Create an AWS Step Functions state machine. Configure the state machine to invoke the Lambda function execution role at a specified interval by using a Wait state. Set the interval to 15 minutes.
  • D. Provision a small Amazon EC2 instance. Set up a cron job that invokes the Lambda function every 15 minutes.

Correct Answer: B 🗳️

Claire_KMT
1 week, 2 days ago
A. Create an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes. Add the Lambda function as the target of the EventBridge rule.
upvoted 1 times
...
didorins
1 week, 2 days ago
Selected Answer: A
Run Lambda as cron = Event Bridge
upvoted 1 times
...
LemonGremlin
1 week, 3 days ago
Selected Answer: A
option A is the most efficient and least development effort option for invoking the Lambda function every 15 minutes, as it leverages Amazon EventBridge's built-in scheduling capabilities and is fully serverless.
upvoted 1 times
...
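Option A in request-parameter form, shaped like boto3's events.put_rule and events.put_targets calls. The rule name and function ARN are placeholders, and nothing is actually called here; the key piece is the rate expression.

```python
# EventBridge rule: runs on a fixed schedule of every 15 minutes.
put_rule_params = {
    "Name": "invoke-order-metrics",            # hypothetical rule name
    "ScheduleExpression": "rate(15 minutes)",  # the 15-minute schedule
    "State": "ENABLED",
}

# Target: the Lambda function that calculates order success rates.
put_targets_params = {
    "Rule": put_rule_params["Name"],
    "Targets": [
        {
            "Id": "order-metrics-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:calc-success-rate",
        }
    ],
}
```

(The Lambda function would also need a resource-based permission allowing events.amazonaws.com to invoke it.)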
Question #247 Topic 1

A company deploys a photo-processing application to an Amazon EC2 instance. The application needs to process each photo in less than 5 seconds. If processing takes longer than 5 seconds, the company’s development team must receive a notification.

How can a developer implement the required time measurement and notification with the LEAST operational overhead?

  • A. Create an Amazon CloudWatch custom metric. Each time a photo is processed, publish the processing time as a metric value. Create a CloudWatch alarm that is based on a static threshold of 5 seconds. Notify the development team by using an Amazon Simple Notification Service (Amazon SNS) topic.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Each time a photo is processed, publish the processing time to the queue. Create an application to consume from the queue and to determine whether any values are more than 5 seconds. Notify the development team by using an Amazon Simple Notification Service (Amazon SNS) topic.
  • C. Create an Amazon CloudWatch custom metric. Each time a photo is processed, publish the processing time as a metric value. Create a CloudWatch alarm that enters ALARM state if the average of values is greater than 5 seconds. Notify the development team by sending an Amazon Simple Email Service (Amazon SES) message.
  • D. Create an Amazon Kinesis data stream. Each time a photo is processed, publish the processing time to the data stream. Create an Amazon CloudWatch alarm that enters ALARM state if any values are more than 5 seconds. Notify the development team by using an Amazon Simple Notification Service (Amazon SNS) topic.

Correct Answer: A 🗳️

tapan666
1 week, 2 days ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/88805-exam-aws-certified-developer-associate-topic-1-question-263/
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
A. Create an Amazon CloudWatch custom metric. Each time a photo is processed, publish the processing time as a metric value. Create a CloudWatch alarm that is based on a static threshold of 5 seconds. Notify the development team by using an Amazon Simple Notification Service (Amazon SNS) topic.
upvoted 1 times
...
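Option A as the two CloudWatch payloads involved: publishing each photo's processing time as a custom metric, and an alarm with a static 5-second threshold that notifies an SNS topic. The namespace, metric name, statistic choice, and topic ARN below are illustrative assumptions; no AWS call is made.

```python
# Published once per photo, with the measured processing time.
put_metric_data_params = {
    "Namespace": "PhotoApp",  # hypothetical custom namespace
    "MetricData": [
        {"MetricName": "ProcessingTime", "Value": 4.2, "Unit": "Seconds"},
    ],
}

# Alarm on the static 5-second threshold; Maximum catches any single
# slow photo within the period rather than averaging it away.
put_metric_alarm_params = {
    "AlarmName": "photo-processing-too-slow",
    "Namespace": "PhotoApp",
    "MetricName": "ProcessingTime",
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 5.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:dev-team-alerts"],
}
```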
Question #248 Topic 1

A company is using AWS Elastic Beanstalk to manage web applications that are running on Amazon EC2 instances. A developer needs to make configuration changes. The developer must deploy the changes to new instances only.

Which types of deployment can the developer use to meet this requirement? (Choose two.)

  • A. All at once
  • B. Immutable
  • C. Rolling
  • D. Blue/green
  • E. Rolling with additional batch

Correct Answer: BD 🗳️

tapan666
1 week, 2 days ago
Selected Answer: BD
https://www.examtopics.com/discussions/amazon/view/88855-exam-aws-certified-developer-associate-topic-1-question-289/
upvoted 2 times
...
Claire_KMT
1 week, 2 days ago
B. Immutable D. Blue/green
upvoted 1 times
...
Question #249 Topic 1

A developer needs to use Amazon DynamoDB to store customer orders. The developer’s company requires all customer data to be encrypted at rest with a key that the company generates.

What should the developer do to meet these requirements?

  • A. Create the DynamoDB table with encryption set to None. Code the application to use the key to decrypt the data when the application reads from the table. Code the application to use the key to encrypt the data when the application writes to the table.
  • B. Store the key by using AWS Key Management Service (AWS KMS). Choose an AWS KMS customer managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
  • C. Store the key by using AWS Key Management Service (AWS KMS). Create the DynamoDB table with default encryption. Include the kms:Encrypt parameter with the Amazon Resource Name (ARN) of the AWS KMS key when using the DynamoDB software development kit (SDK).
  • D. Store the key by using AWS Key Management Service (AWS KMS). Choose an AWS KMS AWS managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.

Correct Answer: B 🗳️

tapan666
1 week, 2 days ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/78943-exam-aws-certified-developer-associate-topic-1-question-23/
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
B. Store the key by using AWS Key Management Service (AWS KMS). Choose an AWS KMS customer managed key during the creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
upvoted 1 times
...
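Option B in create_table request form; the table name, key schema, and KMS key ARN are placeholders. The decisive field is the SSESpecification with SSEType "KMS" plus a customer managed KMSMasterKeyId, since only a customer managed key can wrap key material the company generates.

```python
# Sketch of the DynamoDB table settings for option B. No AWS call
# is made; names and the key ARN are illustrative.
create_table_params = {
    "TableName": "CustomerOrders",
    "AttributeDefinitions": [{"AttributeName": "OrderId", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "OrderId", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",
    "SSESpecification": {
        "Enabled": True,
        "SSEType": "KMS",
        # Customer managed key (the company's), not the AWS managed key:
        "KMSMasterKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    },
}
```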
Question #250 Topic 1

A company uses AWS CloudFormation to deploy an application that uses an Amazon API Gateway REST API with AWS Lambda function integration. The application uses Amazon DynamoDB for data persistence. The application has three stages: development, testing, and production. Each stage uses its own DynamoDB table.

The company has encountered unexpected issues when promoting changes to the production stage. The changes were successful in the development and testing stages. A developer needs to route 20% of the traffic to the new production stage API with the next production release. The developer needs to route the remaining 80% of the traffic to the existing production stage. The solution must minimize the number of errors that any single customer experiences.

Which approach should the developer take to meet these requirements?

  • A. Update 20% of the planned changes to the production stage. Deploy the new production stage. Monitor the results. Repeat this process five times to test all planned changes.
  • B. Update the Amazon Route 53 DNS record entry for the production stage API to use a weighted routing policy. Set the weight to a value of 80. Add a second record for the production domain name. Change the second routing policy to a weighted routing policy. Set the weight of the second policy to a value of 20. Change the alias of the second policy to use the testing stage API.
  • C. Deploy an Application Load Balancer (ALB) in front of the REST API. Change the production API Amazon Route 53 record to point traffic to the ALB. Register the production and testing stages as targets of the ALB with weights of 80% and 20%, respectively.
  • D. Configure canary settings for the production stage API. Change the percentage of traffic directed to canary deployment to 20%. Make the planned updates to the production stage. Deploy the changes

Correct Answer: D 🗳️

Claire_KMT
1 week, 2 days ago
D. Configure canary settings for the production stage API. Change the percentage of traffic directed to canary deployment to 20%. Make the planned updates to the production stage. Deploy the changes
upvoted 1 times
...
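Option D's canary configuration is a small payload on the API Gateway stage or deployment (the canarySettings field used by create_deployment/update_stage). Values below are illustrative; the 20/80 split is the question's requirement.

```python
# API Gateway stage canary settings for option D: 20% of requests go
# to the canary deployment, the remaining 80% to the current one.
canary_settings = {
    "percentTraffic": 20.0,
    "useStageCache": False,  # don't serve canary traffic from the stage cache
}
```

If the canary is healthy, promoting it makes the new deployment the stage's main deployment; if not, the canary is deleted and 100% of traffic stays on the existing production deployment.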
Question #251 Topic 1

A developer has created a data collection application that uses Amazon API Gateway, AWS Lambda, and Amazon S3. The application’s users periodically upload data files and wait for the validation status to be reflected on a processing dashboard. The validation process is complex and time-consuming for large files.

Some users are uploading dozens of large files and have to wait and refresh the processing dashboard to see if the files have been validated. The developer must refactor the application to immediately update the validation result on the user’s dashboard without reloading the full dashboard.

What is the MOST operationally efficient solution that meets these requirements?

  • A. Integrate the client with an API Gateway WebSocket API. Save the user-uploaded files with the WebSocket connection ID. Push the validation status to the connection ID when the processing is complete to initiate an update of the user interface.
  • B. Launch an Amazon EC2 micro instance, and set up a WebSocket server. Send the user-uploaded file and user detail to the EC2 instance after the user uploads the file. Use the WebSocket server to send updates to the user interface when the uploaded file is processed.
  • C. Save the user’s email address along with the user-uploaded file. When the validation process is complete, send an email notification through Amazon Simple Notification Service (Amazon SNS) to the user who uploaded the file.
  • D. Save the user-uploaded file and user detail to Amazon DynamoDB. Use Amazon DynamoDB Streams with Amazon Simple Notification Service (Amazon SNS) push notifications to send updates to the browser to update the user interface.

Correct Answer: A 🗳️

PrakashM14
5 days, 21 hours ago
Selected Answer: A
Option B involves setting up a WebSocket server on an EC2 instance, which is more manual and may require additional management overhead. Option C relies on email notifications, which might introduce delays and may not provide the desired real-time updates. Option D involves DynamoDB and SNS, which may add complexity without the direct support for real-time updates that WebSocket provides. So, Option A
upvoted 1 times
...
tapan666
1 week, 2 days ago
Selected Answer: D
Option C could work for notifying users, but it doesn't provide immediate updates on the user's dashboard. Users would need to check their email to see the validation status, which may not be as user-friendly as real-time updates on the dashboard. It adds complexity with email notifications and may result in longer delays before users see the validation results. Option D (using DynamoDB Streams and Amazon SNS) is preferred because it offers a more operationally efficient and real-time solution without the need for WebSocket management, email notifications, or a constantly running EC2 instance. It provides immediate updates on the user's dashboard while keeping operational complexity and costs to a minimum.
upvoted 1 times
...
Claire_KMT
1 week, 2 days ago
B. Launch an Amazon EC2 micro instance, and set up a WebSocket server. Send the user-uploaded file and user detail to the EC2 instance after the user uploads the file. Use the WebSocket server to send updates to the user interface when the uploaded file is processed. OR D. Save the user-uploaded file and user detail to Amazon DynamoDB. Use Amazon DynamoDB Streams with Amazon Simple Notification Service (Amazon SNS) push notifications to send updates to the browser to update the user interface.
upvoted 1 times
...
Question #252 Topic 1

A company’s developer is creating an application that uses Amazon API Gateway. The company wants to ensure that only users in the Sales department can use the application. The users authenticate to the application by using federated credentials from a third-party identity provider (IdP) through Amazon Cognito. The developer has set up an attribute mapping to map an attribute that is named Department and to pass the attribute to a custom AWS Lambda authorizer.

To test the access limitation, the developer sets their department to Engineering in the IdP and attempts to log in to the application. The developer is denied access. The developer then updates their department to Sales in the IdP and attempts to log in. Again, the developer is denied access. The developer checks the logs and discovers that access is being denied because the developer’s access token has a department value of Engineering.

Which of the following is a possible reason that the developer’s department is still being reported as Engineering instead of Sales?

  • A. Authorization caching is enabled in the custom Lambda authorizer.
  • B. Authorization caching is enabled on the Amazon Cognito user pool.
  • C. The IAM role for the custom Lambda authorizer does not have a Department tag.
  • D. The IAM role for the Amazon Cognito user pool does not have a Department tag.

Correct Answer: A 🗳️

PrakashM14
5 days, 21 hours ago
Selected Answer: B
Options A, C, and D do not directly address the caching of user attributes in the context of Amazon Cognito. Option A refers to caching in the custom Lambda authorizer, but the issue seems more likely to be related to the Cognito user pool's caching mechanism. Options C and D mention IAM roles and tags, which may be relevant for other aspects of access control but are not the primary cause of the reported department value in this scenario.
upvoted 1 times
...
tapan666
1 week, 2 days ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/88914-exam-aws-certified-developer-associate-topic-1-question-294/
Claire_KMT
1 week, 2 days ago
B. Authorization caching is enabled on the Amazon Cognito user pool.
Question #253 Topic 1

A company has migrated an application to Amazon EC2 instances. Automatic scaling is working well for the application user interface. However, the process to deliver shipping requests to the company’s warehouse staff is encountering issues. Duplicate shipping requests are arriving, and some requests are lost or arrive out of order.

The company must avoid duplicate shipping requests and must process the requests in the order that the requests arrive. Requests are never more than 250 KB in size and take 5-10 minutes to process. A developer needs to rearchitect the application to improve the reliability of the delivery and processing of the requests.

What should the developer do to meet these requirements?

  • A. Create an Amazon Kinesis Data Firehose delivery stream to process the requests. Create an Amazon Kinesis data stream. Modify the application to write the requests to the Kinesis data stream.
  • B. Create an AWS Lambda function to process the requests. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the Lambda function to the SNS topic. Modify the application to write the requests to the SNS topic.
  • C. Create an AWS Lambda function to process the requests. Create an Amazon Simple Queue Service (Amazon SQS) standard queue. Set the SQS queue as an event source for the Lambda function. Modify the application to write the requests to the SQS queue.
  • D. Create an AWS Lambda function to process the requests. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the SQS queue as an event source for the Lambda function. Modify the application to write the requests to the SQS queue.

Correct Answer: D 🗳️
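Answer D works because an SQS FIFO queue provides exactly the two guarantees the scenario needs: deduplication (via a deduplication ID, which can be content-derived) and strict ordering within a message group. A toy in-memory model of those semantics, not the SQS API; the class and names are illustrative:

```python
import hashlib

class FifoQueueModel:
    """Toy model of SQS FIFO semantics: per-group ordering plus
    deduplication on a content-derived deduplication ID."""

    def __init__(self):
        self.messages = []   # (group_id, body) in arrival order
        self._seen = set()   # deduplication IDs within the dedup window

    def send(self, body, group_id, dedup_id=None):
        # Content-based deduplication: hash the body if no explicit ID.
        dedup_id = dedup_id or hashlib.sha256(body.encode()).hexdigest()
        if dedup_id in self._seen:
            return False     # duplicate silently dropped
        self._seen.add(dedup_id)
        self.messages.append((group_id, body))
        return True

    def receive_all(self, group_id):
        # Messages within one group are delivered in send order.
        return [b for g, b in self.messages if g == group_id]


q = FifoQueueModel()
q.send('{"order": 1}', group_id="warehouse-1")
q.send('{"order": 1}', group_id="warehouse-1")  # duplicate, dropped
q.send('{"order": 2}', group_id="warehouse-1")
print(q.receive_all("warehouse-1"))  # ['{"order": 1}', '{"order": 2}']
```

With boto3 against a real FIFO queue, the equivalent send is `send_message(QueueUrl=..., MessageBody=..., MessageGroupId=...)`, with either an explicit `MessageDeduplicationId` or content-based deduplication enabled on the queue. The 250 KB requests fit under the 256 KB SQS message limit, and a Lambda event source mapping preserves per-group ordering.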

tapan666
1 week, 2 days ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/88667-exam-aws-certified-developer-associate-topic-1-question-209/
Claire_KMT
1 week, 2 days ago
D. Create an AWS Lambda function to process the requests. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the SQS queue as an event source for the Lambda function. Modify the application to write the requests to the SQS queue.
Question #254 Topic 1

A developer is creating a machine learning (ML) pipeline in AWS Step Functions that contains AWS Lambda functions. The developer has configured an Amazon Simple Queue Service (Amazon SQS) queue to deliver ML model parameters to the ML pipeline to train ML models. The trained models are uploaded to an Amazon S3 bucket.

The developer needs a solution that can locally test the ML pipeline without making service integration calls to Amazon SQS and Amazon S3.

Which solution will meet these requirements?

  • A. Use the Amazon CodeGuru Profiler to analyze the Lambda functions used in the AWS Step Functions pipeline.
  • B. Use the AWS Step Functions Local Docker Image to run and locally test the Lambda functions.
  • C. Use the AWS Serverless Application Model (AWS SAM) CLI to run and locally test the Lambda functions.
  • D. Use AWS Step Functions Local with mocked service integrations.

Correct Answer: D 🗳️
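Step Functions Local (answer D) reads mocked responses from a mock configuration file supplied through the `SFN_MOCK_CONFIG` environment variable, so the SQS and S3 service-integration states return canned payloads instead of making real calls. A sketch of such a file; the state machine, state, and test-case names are hypothetical:

```json
{
  "StateMachines": {
    "MLPipeline": {
      "TestCases": {
        "HappyPath": {
          "ReadModelParams": "MockedSqsReceive",
          "UploadTrainedModel": "MockedS3Put"
        }
      }
    }
  },
  "MockedResponses": {
    "MockedSqsReceive": {
      "0": {
        "Return": {
          "Messages": [
            { "Body": "{\"learning_rate\": 0.01, \"epochs\": 10}" }
          ]
        }
      }
    },
    "MockedS3Put": {
      "0": {
        "Return": { "ETag": "\"mock-etag\"" }
      }
    }
  }
}
```

A test case is then selected at execution time by appending its name to the state machine ARN passed to `StartExecution` (e.g. `...:stateMachine:MLPipeline#HappyPath`).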

tapan666
1 week, 2 days ago
Selected Answer: D
D. Use AWS Step Functions Local with mocked service integrations.
Claire_KMT
1 week, 2 days ago
D. Use AWS Step Functions Local with mocked service integrations.